00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 979
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3641
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.074 The recommended git tool is: git
00:00:00.074 using credential 00000000-0000-0000-0000-000000000002
00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.107 Fetching changes from the remote Git repository
00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.151 Using shallow fetch with depth 1
00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.151 > git --version # timeout=10
00:00:00.215 > git --version # 'git version 2.39.2'
00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.246 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.246 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.811 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.823 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.836 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:04.836 > git config core.sparsecheckout # timeout=10
00:00:04.847 > git read-tree -mu HEAD # timeout=10
00:00:04.861 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:04.881 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:04.881 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:04.971 [Pipeline] Start of Pipeline
00:00:04.984 [Pipeline] library
00:00:04.985 Loading library shm_lib@master
00:00:04.986 Library shm_lib@master is cached. Copying from home.
00:00:04.999 [Pipeline] node
00:00:05.014 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.015 [Pipeline] {
00:00:05.027 [Pipeline] catchError
00:00:05.030 [Pipeline] {
00:00:05.044 [Pipeline] wrap
00:00:05.054 [Pipeline] {
00:00:05.062 [Pipeline] stage
00:00:05.064 [Pipeline] { (Prologue)
00:00:05.271 [Pipeline] sh
00:00:05.550 + logger -p user.info -t JENKINS-CI
00:00:05.565 [Pipeline] echo
00:00:05.567 Node: GP11
00:00:05.572 [Pipeline] sh
00:00:05.865 [Pipeline] setCustomBuildProperty
00:00:05.873 [Pipeline] echo
00:00:05.874 Cleanup processes
00:00:05.879 [Pipeline] sh
00:00:06.160 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.160 497950 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.172 [Pipeline] sh
00:00:06.452 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.452 ++ grep -v 'sudo pgrep'
00:00:06.452 ++ awk '{print $1}'
00:00:06.452 + sudo kill -9
00:00:06.452 + true
00:00:06.465 [Pipeline] cleanWs
00:00:06.474 [WS-CLEANUP] Deleting project workspace...
00:00:06.474 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.482 [WS-CLEANUP] done
00:00:06.485 [Pipeline] setCustomBuildProperty
00:00:06.496 [Pipeline] sh
00:00:06.774 + sudo git config --global --replace-all safe.directory '*'
00:00:06.864 [Pipeline] httpRequest
00:00:07.481 [Pipeline] echo
00:00:07.482 Sorcerer 10.211.164.20 is alive
00:00:07.488 [Pipeline] retry
00:00:07.489 [Pipeline] {
00:00:07.497 [Pipeline] httpRequest
00:00:07.500 HttpMethod: GET
00:00:07.501 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.503 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.524 Response Code: HTTP/1.1 200 OK
00:00:07.524 Success: Status code 200 is in the accepted range: 200,404
00:00:07.524 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:13.519 [Pipeline] }
00:00:13.539 [Pipeline] // retry
00:00:13.547 [Pipeline] sh
00:00:13.836 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:13.853 [Pipeline] httpRequest
00:00:14.459 [Pipeline] echo
00:00:14.461 Sorcerer 10.211.164.20 is alive
00:00:14.470 [Pipeline] retry
00:00:14.472 [Pipeline] {
00:00:14.485 [Pipeline] httpRequest
00:00:14.490 HttpMethod: GET
00:00:14.491 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:14.491 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:14.519 Response Code: HTTP/1.1 200 OK
00:00:14.520 Success: Status code 200 is in the accepted range: 200,404
00:00:14.520 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:55.462 [Pipeline] }
00:01:55.480 [Pipeline] // retry
00:01:55.488 [Pipeline] sh
00:01:55.776 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:59.079 [Pipeline] sh
00:01:59.366 + git -C spdk log --oneline -n5
00:01:59.366 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:01:59.366 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:01:59.366 4bcab9fb9 correct kick for CQ full case
00:01:59.366 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:01:59.366 318515b44 nvme/perf: interrupt mode support for pcie controller
00:01:59.386 [Pipeline] withCredentials
00:01:59.397 > git --version # timeout=10
00:01:59.410 > git --version # 'git version 2.39.2'
00:01:59.429 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:59.431 [Pipeline] {
00:01:59.441 [Pipeline] retry
00:01:59.443 [Pipeline] {
00:01:59.459 [Pipeline] sh
00:01:59.746 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:59.759 [Pipeline] }
00:01:59.776 [Pipeline] // retry
00:01:59.782 [Pipeline] }
00:01:59.800 [Pipeline] // withCredentials
00:01:59.810 [Pipeline] httpRequest
00:02:00.251 [Pipeline] echo
00:02:00.253 Sorcerer 10.211.164.20 is alive
00:02:00.263 [Pipeline] retry
00:02:00.265 [Pipeline] {
00:02:00.279 [Pipeline] httpRequest
00:02:00.284 HttpMethod: GET
00:02:00.284 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:00.286 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:00.290 Response Code: HTTP/1.1 200 OK
00:02:00.290 Success: Status code 200 is in the accepted range: 200,404
00:02:00.291 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:02.274 [Pipeline] }
00:02:02.292 [Pipeline] // retry
00:02:02.299 [Pipeline] sh
00:02:02.586 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:04.502 [Pipeline] sh
00:02:04.790 + git -C dpdk log --oneline -n5
00:02:04.790 eeb0605f11 version: 23.11.0
00:02:04.790 238778122a doc: update release notes for 23.11
00:02:04.790 46aa6b3cfc doc: fix description of RSS features
00:02:04.790 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:04.790 7e421ae345 devtools: support skipping forbid rule check
00:02:04.801 [Pipeline] }
00:02:04.817 [Pipeline] // stage
00:02:04.828 [Pipeline] stage
00:02:04.831 [Pipeline] { (Prepare)
00:02:04.852 [Pipeline] writeFile
00:02:04.870 [Pipeline] sh
00:02:05.156 + logger -p user.info -t JENKINS-CI
00:02:05.169 [Pipeline] sh
00:02:05.456 + logger -p user.info -t JENKINS-CI
00:02:05.471 [Pipeline] sh
00:02:05.762 + cat autorun-spdk.conf
00:02:05.762 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.762 SPDK_TEST_NVMF=1
00:02:05.762 SPDK_TEST_NVME_CLI=1
00:02:05.762 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:05.762 SPDK_TEST_NVMF_NICS=e810
00:02:05.762 SPDK_TEST_VFIOUSER=1
00:02:05.762 SPDK_RUN_UBSAN=1
00:02:05.762 NET_TYPE=phy
00:02:05.762 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:05.762 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:05.771 RUN_NIGHTLY=1
00:02:05.775 [Pipeline] readFile
00:02:05.802 [Pipeline] withEnv
00:02:05.804 [Pipeline] {
00:02:05.816 [Pipeline] sh
00:02:06.104 + set -ex
00:02:06.104 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:06.104 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:06.104 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.104 ++ SPDK_TEST_NVMF=1
00:02:06.104 ++ SPDK_TEST_NVME_CLI=1
00:02:06.104 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:06.104 ++ SPDK_TEST_NVMF_NICS=e810
00:02:06.104 ++ SPDK_TEST_VFIOUSER=1
00:02:06.104 ++ SPDK_RUN_UBSAN=1
00:02:06.104 ++ NET_TYPE=phy
00:02:06.104 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:06.104 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:06.104 ++ RUN_NIGHTLY=1
00:02:06.104 + case $SPDK_TEST_NVMF_NICS in
00:02:06.104 + DRIVERS=ice
00:02:06.104 + [[ tcp == \r\d\m\a ]]
00:02:06.104 + [[ -n ice ]]
00:02:06.104 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:06.104 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:06.104 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:06.104 rmmod: ERROR: Module irdma is not currently loaded
00:02:06.104 rmmod: ERROR: Module i40iw is not currently loaded
00:02:06.104 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:06.104 + true
00:02:06.104 + for D in $DRIVERS
00:02:06.104 + sudo modprobe ice
00:02:06.104 + exit 0
00:02:06.114 [Pipeline] }
00:02:06.130 [Pipeline] // withEnv
00:02:06.135 [Pipeline] }
00:02:06.148 [Pipeline] // stage
00:02:06.157 [Pipeline] catchError
00:02:06.159 [Pipeline] {
00:02:06.173 [Pipeline] timeout
00:02:06.173 Timeout set to expire in 1 hr 0 min
00:02:06.175 [Pipeline] {
00:02:06.189 [Pipeline] stage
00:02:06.191 [Pipeline] { (Tests)
00:02:06.205 [Pipeline] sh
00:02:06.491 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:06.491 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:06.491 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:06.491 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:06.491 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:06.491 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:06.491 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:06.491 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:06.491 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:06.491 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:06.491 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:06.491 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:06.491 + source /etc/os-release
00:02:06.491 ++ NAME='Fedora Linux'
00:02:06.491 ++ VERSION='39 (Cloud Edition)'
00:02:06.491 ++ ID=fedora
00:02:06.491 ++ VERSION_ID=39
00:02:06.491 ++ VERSION_CODENAME=
00:02:06.491 ++ PLATFORM_ID=platform:f39
00:02:06.491 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:06.491 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:06.491 ++ LOGO=fedora-logo-icon
00:02:06.491 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:06.491 ++ HOME_URL=https://fedoraproject.org/
00:02:06.491 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:06.491 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:06.491 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:06.491 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:06.491 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:06.491 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:06.491 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:06.491 ++ SUPPORT_END=2024-11-12
00:02:06.491 ++ VARIANT='Cloud Edition'
00:02:06.491 ++ VARIANT_ID=cloud
00:02:06.491 + uname -a
00:02:06.491 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:06.491 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:07.427 Hugepages
00:02:07.427 node hugesize free / total
00:02:07.427 node0 1048576kB 0 / 0
00:02:07.427 node0 2048kB 0 / 0
00:02:07.427 node1 1048576kB 0 / 0
00:02:07.427 node1 2048kB 0 / 0
00:02:07.427
00:02:07.427 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:07.427 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:02:07.427 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:02:07.427 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:07.686 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:07.686 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:07.686 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:07.686 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:07.686 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:07.686 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:07.686 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:07.686 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:07.686 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:07.686 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:07.686 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:07.686 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:07.686 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:07.686 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:07.686 + rm -f /tmp/spdk-ld-path 00:02:07.686 + source autorun-spdk.conf 00:02:07.686 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.686 ++ SPDK_TEST_NVMF=1 00:02:07.686 ++ SPDK_TEST_NVME_CLI=1 00:02:07.686 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.686 ++ SPDK_TEST_NVMF_NICS=e810 00:02:07.687 ++ SPDK_TEST_VFIOUSER=1 00:02:07.687 ++ SPDK_RUN_UBSAN=1 00:02:07.687 ++ NET_TYPE=phy 00:02:07.687 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:07.687 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.687 ++ RUN_NIGHTLY=1 00:02:07.687 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.687 + [[ -n '' ]] 00:02:07.687 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.687 + for M in /var/spdk/build-*-manifest.txt 00:02:07.687 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:07.687 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.687 + for M in /var/spdk/build-*-manifest.txt 00:02:07.687 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.687 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.687 + for M in /var/spdk/build-*-manifest.txt 00:02:07.687 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.687 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.687 ++ uname 00:02:07.687 + [[ Linux == \L\i\n\u\x ]] 00:02:07.687 + sudo dmesg -T 00:02:07.687 + sudo dmesg --clear 00:02:07.687 + dmesg_pid=498710 00:02:07.687 + [[ Fedora Linux == FreeBSD ]] 00:02:07.687 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.687 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.687 + sudo dmesg -Tw 00:02:07.687 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.687 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.687 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.687 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.687 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.687 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:07.687 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.687 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.687 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.687 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.687 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.687 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.687 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.687 07:36:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:07.687 07:36:00 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 
-- $ SPDK_TEST_NVME_CLI=1 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.687 07:36:00 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:07.687 07:36:00 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:07.687 07:36:00 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.946 07:36:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:07.946 07:36:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:07.946 07:36:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:07.946 07:36:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.946 07:36:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.946 07:36:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.946 07:36:00 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.946 07:36:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.946 07:36:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.946 07:36:00 -- paths/export.sh@5 -- $ export PATH 00:02:07.946 07:36:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.946 07:36:00 -- 
common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.946 07:36:00 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:07.946 07:36:00 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731911760.XXXXXX 00:02:07.946 07:36:00 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731911760.qq0P3K 00:02:07.946 07:36:00 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:07.946 07:36:00 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:02:07.946 07:36:00 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.946 07:36:00 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:07.946 07:36:00 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:07.946 07:36:00 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.946 07:36:00 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:07.946 07:36:00 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:07.946 07:36:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.946 07:36:00 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:07.946 07:36:00 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:07.946 07:36:00 -- pm/common@17 -- $ local monitor 00:02:07.946 07:36:00 -- pm/common@19 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.946 07:36:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.946 07:36:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.946 07:36:00 -- pm/common@21 -- $ date +%s 00:02:07.946 07:36:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.946 07:36:00 -- pm/common@21 -- $ date +%s 00:02:07.946 07:36:00 -- pm/common@25 -- $ sleep 1 00:02:07.946 07:36:00 -- pm/common@21 -- $ date +%s 00:02:07.946 07:36:00 -- pm/common@21 -- $ date +%s 00:02:07.946 07:36:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731911760 00:02:07.946 07:36:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731911760 00:02:07.946 07:36:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731911760 00:02:07.946 07:36:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731911760 00:02:07.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731911760_collect-cpu-load.pm.log 00:02:07.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731911760_collect-vmstat.pm.log 00:02:07.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731911760_collect-cpu-temp.pm.log 00:02:07.946 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731911760_collect-bmc-pm.bmc.pm.log 00:02:08.882 07:36:01 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:08.882 07:36:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.882 07:36:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.882 07:36:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.882 07:36:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.882 Mon Nov 18 06:36:01 AM UTC 2024 00:02:08.882 07:36:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.882 v25.01-pre-189-g83e8405e4 00:02:08.883 07:36:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:08.883 07:36:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.883 07:36:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.883 07:36:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:08.883 07:36:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:08.883 07:36:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.883 ************************************ 00:02:08.883 START TEST ubsan 00:02:08.883 ************************************ 00:02:08.883 07:36:01 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:08.883 using ubsan 00:02:08.883 00:02:08.883 real 0m0.000s 00:02:08.883 user 0m0.000s 00:02:08.883 sys 0m0.000s 00:02:08.883 07:36:01 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:08.883 07:36:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.883 ************************************ 00:02:08.883 END TEST ubsan 00:02:08.883 ************************************ 00:02:08.883 07:36:01 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:08.883 07:36:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:08.883 07:36:01 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:08.883 07:36:01 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:08.883 07:36:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:08.883 07:36:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.883 ************************************ 00:02:08.883 START TEST build_native_dpdk 00:02:08.883 ************************************ 00:02:08.883 07:36:01 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:08.883 07:36:01 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:08.883 eeb0605f11 version: 23.11.0 00:02:08.883 238778122a doc: update release notes for 23.11 00:02:08.883 46aa6b3cfc doc: fix description of RSS features 00:02:08.883 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:08.883 7e421ae345 devtools: support skipping forbid rule check 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" 
"mempool/ring" "net/i40e" "net/i40e/base") 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:08.883 07:36:01 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v = 0 )) 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:08.883 patching file config/rte_config.h 00:02:08.883 Hunk #1 succeeded at 60 (offset 1 line). 
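The `cmp_versions` xtrace above (split each version on `.`/`-`/`:` via `IFS`, then compare field by field, returning as soon as one side wins) can be sketched as follows. This is a reconstruction from the trace, not the actual `scripts/common.sh` source; the function name and the `${...:-0}` padding of short versions are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above: split both versions on
# the characters in IFS (., -, :), then walk the fields numerically.
cmp_versions_sketch() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"          # e.g. 23.11.0 -> (23 11 0)
    local op=$2
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return   # ver1 is newer
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return   # ver1 is older
        fi
    done
    [[ $op == *=* ]]                                  # all fields equal
}

cmp_versions_sketch 23.11.0 '<' 21.11.0 && echo lt || echo not-lt   # not-lt
cmp_versions_sketch 23.11.0 '<' 24.07.0 && echo lt || echo not-lt   # lt
```

The two calls mirror the log: `lt 23.11.0 21.11.0` returns 1 (so the 21.11-only patch path is skipped), while `lt 23.11.0 24.07.0` returns 0 (so the `rte_pcapng.c` patch is applied).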
00:02:08.883 07:36:01 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:08.883 07:36:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:08.884 07:36:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:08.884 patching file lib/pcapng/rte_pcapng.c 00:02:08.884 07:36:01 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:08.884 07:36:01 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:08.884 07:36:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:08.884 07:36:01 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:08.884 07:36:01 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:08.884 07:36:01 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:08.884 07:36:01 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:08.884 07:36:01 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:14.154 The Meson build system 00:02:14.154 Version: 1.5.0 00:02:14.154 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:14.154 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:14.154 Build type: native build 00:02:14.154 Program cat found: YES (/usr/bin/cat) 00:02:14.154 Project name: DPDK 00:02:14.154 Project version: 23.11.0 00:02:14.154 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.154 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:14.154 Host machine cpu family: x86_64 00:02:14.154 Host machine cpu: x86_64 00:02:14.154 Message: ## Building in Developer Mode ## 00:02:14.154 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.154 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:14.155 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.155 Program python3 found: YES (/usr/bin/python3) 00:02:14.155 Program cat found: YES (/usr/bin/cat) 00:02:14.155 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:14.155 Compiler for C supports arguments -march=native: YES 00:02:14.155 Checking for size of "void *" : 8 00:02:14.155 Checking for size of "void *" : 8 (cached) 00:02:14.155 Library m found: YES 00:02:14.155 Library numa found: YES 00:02:14.155 Has header "numaif.h" : YES 00:02:14.155 Library fdt found: NO 00:02:14.155 Library execinfo found: NO 00:02:14.155 Has header "execinfo.h" : YES 00:02:14.155 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.155 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.155 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.155 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.155 Run-time dependency openssl found: YES 3.1.1 00:02:14.155 Run-time dependency libpcap found: YES 1.10.4 00:02:14.155 Has header "pcap.h" with dependency libpcap: YES 00:02:14.155 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.155 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.155 Compiler for C supports arguments -Wformat: YES 00:02:14.155 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.155 Compiler for C supports arguments -Wformat-security: NO 00:02:14.155 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.155 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.155 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.155 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.155 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.155 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.155 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.155 Compiler for C supports arguments -Wundef: YES 00:02:14.155 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.155 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.155 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.155 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:14.155 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.155 Program objdump found: YES (/usr/bin/objdump) 00:02:14.155 Compiler for C supports arguments -mavx512f: YES 00:02:14.155 Checking if "AVX512 checking" compiles: YES 00:02:14.155 Fetching value of define "__SSE4_2__" : 1 00:02:14.155 Fetching value of define "__AES__" : 1 00:02:14.155 Fetching value of define "__AVX__" : 1 00:02:14.155 Fetching value of define "__AVX2__" : (undefined) 00:02:14.155 Fetching value of define "__AVX512BW__" : (undefined) 00:02:14.155 Fetching value of define "__AVX512CD__" : (undefined) 00:02:14.155 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:14.155 Fetching value of define "__AVX512F__" : (undefined) 00:02:14.155 Fetching value of define "__AVX512VL__" : (undefined) 00:02:14.155 Fetching value of define "__PCLMUL__" : 1 00:02:14.155 Fetching value of define "__RDRND__" : 1 00:02:14.155 Fetching value of define "__RDSEED__" : (undefined) 00:02:14.155 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.155 Fetching value of define "__znver1__" : (undefined) 00:02:14.155 Fetching value of define "__znver2__" : (undefined) 00:02:14.155 Fetching value of define "__znver3__" : (undefined) 00:02:14.155 Fetching value of define "__znver4__" : (undefined) 00:02:14.155 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.155 Message: lib/log: Defining dependency "log" 00:02:14.155 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.155 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.155 Checking for function "getentropy" : NO 00:02:14.155 Message: lib/eal: Defining dependency "eal" 00:02:14.155 Message: lib/ring: Defining dependency "ring" 00:02:14.155 Message: lib/rcu: Defining dependency "rcu" 00:02:14.155 Message: lib/mempool: Defining dependency "mempool" 00:02:14.155 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.155 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.155 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.155 Compiler for C supports arguments -mpclmul: YES 00:02:14.155 Compiler for C supports arguments -maes: YES 00:02:14.155 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.155 Compiler for C supports arguments -mavx512bw: YES 00:02:14.155 Compiler for C supports arguments -mavx512dq: YES 00:02:14.155 Compiler for C supports arguments -mavx512vl: YES 00:02:14.155 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.155 Compiler for C supports arguments -mavx2: YES 00:02:14.155 Compiler for C supports arguments -mavx: YES 00:02:14.155 Message: lib/net: Defining dependency "net" 00:02:14.155 Message: lib/meter: Defining dependency "meter" 00:02:14.155 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.155 Message: lib/pci: Defining dependency "pci" 00:02:14.155 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.155 Message: lib/metrics: Defining dependency "metrics" 00:02:14.155 Message: lib/hash: Defining dependency "hash" 00:02:14.155 Message: lib/timer: Defining dependency "timer" 00:02:14.155 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.155 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:14.155 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:14.155 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:14.155 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:14.155 Message: lib/acl: Defining dependency "acl" 00:02:14.155 Message: lib/bbdev: Defining dependency "bbdev" 00:02:14.155 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:14.155 Run-time dependency libelf found: YES 0.191 00:02:14.155 Message: lib/bpf: Defining dependency "bpf" 00:02:14.155 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:14.155 Message: lib/compressdev: Defining 
dependency "compressdev" 00:02:14.155 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.155 Message: lib/distributor: Defining dependency "distributor" 00:02:14.155 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.155 Message: lib/efd: Defining dependency "efd" 00:02:14.155 Message: lib/eventdev: Defining dependency "eventdev" 00:02:14.155 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:14.155 Message: lib/gpudev: Defining dependency "gpudev" 00:02:14.155 Message: lib/gro: Defining dependency "gro" 00:02:14.155 Message: lib/gso: Defining dependency "gso" 00:02:14.155 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:14.155 Message: lib/jobstats: Defining dependency "jobstats" 00:02:14.155 Message: lib/latencystats: Defining dependency "latencystats" 00:02:14.155 Message: lib/lpm: Defining dependency "lpm" 00:02:14.155 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.155 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:14.155 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:14.155 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:14.155 Message: lib/member: Defining dependency "member" 00:02:14.155 Message: lib/pcapng: Defining dependency "pcapng" 00:02:14.155 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.155 Message: lib/power: Defining dependency "power" 00:02:14.155 Message: lib/rawdev: Defining dependency "rawdev" 00:02:14.155 Message: lib/regexdev: Defining dependency "regexdev" 00:02:14.155 Message: lib/mldev: Defining dependency "mldev" 00:02:14.155 Message: lib/rib: Defining dependency "rib" 00:02:14.155 Message: lib/reorder: Defining dependency "reorder" 00:02:14.155 Message: lib/sched: Defining dependency "sched" 00:02:14.155 Message: lib/security: Defining dependency "security" 00:02:14.155 Message: lib/stack: Defining dependency "stack" 00:02:14.155 Has header "linux/userfaultfd.h" : YES 00:02:14.155 Has 
header "linux/vduse.h" : YES 00:02:14.155 Message: lib/vhost: Defining dependency "vhost" 00:02:14.155 Message: lib/ipsec: Defining dependency "ipsec" 00:02:14.155 Message: lib/pdcp: Defining dependency "pdcp" 00:02:14.155 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.155 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:14.155 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:14.155 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:14.155 Message: lib/fib: Defining dependency "fib" 00:02:14.155 Message: lib/port: Defining dependency "port" 00:02:14.155 Message: lib/pdump: Defining dependency "pdump" 00:02:14.155 Message: lib/table: Defining dependency "table" 00:02:14.155 Message: lib/pipeline: Defining dependency "pipeline" 00:02:14.155 Message: lib/graph: Defining dependency "graph" 00:02:14.155 Message: lib/node: Defining dependency "node" 00:02:15.537 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.537 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.537 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.537 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.537 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:15.537 Compiler for C supports arguments -Wno-unused-value: YES 00:02:15.537 Compiler for C supports arguments -Wno-format: YES 00:02:15.537 Compiler for C supports arguments -Wno-format-security: YES 00:02:15.537 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:15.537 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:15.537 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:15.537 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:15.537 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.537 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.537 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:15.537 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:15.537 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:15.537 Has header "sys/epoll.h" : YES 00:02:15.537 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:15.537 Configuring doxy-api-html.conf using configuration 00:02:15.537 Configuring doxy-api-man.conf using configuration 00:02:15.537 Program mandb found: YES (/usr/bin/mandb) 00:02:15.537 Program sphinx-build found: NO 00:02:15.537 Configuring rte_build_config.h using configuration 00:02:15.537 Message: 00:02:15.537 ================= 00:02:15.537 Applications Enabled 00:02:15.537 ================= 00:02:15.537 00:02:15.537 apps: 00:02:15.537 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:15.537 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:15.537 test-pmd, test-regex, test-sad, test-security-perf, 00:02:15.537 00:02:15.537 Message: 00:02:15.537 ================= 00:02:15.537 Libraries Enabled 00:02:15.537 ================= 00:02:15.537 00:02:15.537 libs: 00:02:15.537 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.537 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:15.537 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:15.537 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:15.537 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:15.537 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:15.537 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:15.537 00:02:15.537 00:02:15.537 Message: 00:02:15.537 =============== 00:02:15.537 Drivers Enabled 00:02:15.537 =============== 00:02:15.537 00:02:15.537 common: 00:02:15.537 00:02:15.537 bus: 00:02:15.537 pci, vdev, 00:02:15.537 mempool: 00:02:15.537 ring, 00:02:15.537 dma: 
00:02:15.537 00:02:15.537 net: 00:02:15.537 i40e, 00:02:15.537 raw: 00:02:15.537 00:02:15.537 crypto: 00:02:15.537 00:02:15.537 compress: 00:02:15.537 00:02:15.537 regex: 00:02:15.537 00:02:15.537 ml: 00:02:15.537 00:02:15.537 vdpa: 00:02:15.537 00:02:15.537 event: 00:02:15.537 00:02:15.537 baseband: 00:02:15.537 00:02:15.537 gpu: 00:02:15.537 00:02:15.537 00:02:15.537 Message: 00:02:15.537 ================= 00:02:15.537 Content Skipped 00:02:15.537 ================= 00:02:15.537 00:02:15.537 apps: 00:02:15.537 00:02:15.537 libs: 00:02:15.537 00:02:15.537 drivers: 00:02:15.537 common/cpt: not in enabled drivers build config 00:02:15.537 common/dpaax: not in enabled drivers build config 00:02:15.537 common/iavf: not in enabled drivers build config 00:02:15.537 common/idpf: not in enabled drivers build config 00:02:15.537 common/mvep: not in enabled drivers build config 00:02:15.537 common/octeontx: not in enabled drivers build config 00:02:15.537 bus/auxiliary: not in enabled drivers build config 00:02:15.538 bus/cdx: not in enabled drivers build config 00:02:15.538 bus/dpaa: not in enabled drivers build config 00:02:15.538 bus/fslmc: not in enabled drivers build config 00:02:15.538 bus/ifpga: not in enabled drivers build config 00:02:15.538 bus/platform: not in enabled drivers build config 00:02:15.538 bus/vmbus: not in enabled drivers build config 00:02:15.538 common/cnxk: not in enabled drivers build config 00:02:15.538 common/mlx5: not in enabled drivers build config 00:02:15.538 common/nfp: not in enabled drivers build config 00:02:15.538 common/qat: not in enabled drivers build config 00:02:15.538 common/sfc_efx: not in enabled drivers build config 00:02:15.538 mempool/bucket: not in enabled drivers build config 00:02:15.538 mempool/cnxk: not in enabled drivers build config 00:02:15.538 mempool/dpaa: not in enabled drivers build config 00:02:15.538 mempool/dpaa2: not in enabled drivers build config 00:02:15.538 mempool/octeontx: not in enabled drivers build 
config 00:02:15.538 mempool/stack: not in enabled drivers build config 00:02:15.538 dma/cnxk: not in enabled drivers build config 00:02:15.538 dma/dpaa: not in enabled drivers build config 00:02:15.538 dma/dpaa2: not in enabled drivers build config 00:02:15.538 dma/hisilicon: not in enabled drivers build config 00:02:15.538 dma/idxd: not in enabled drivers build config 00:02:15.538 dma/ioat: not in enabled drivers build config 00:02:15.538 dma/skeleton: not in enabled drivers build config 00:02:15.538 net/af_packet: not in enabled drivers build config 00:02:15.538 net/af_xdp: not in enabled drivers build config 00:02:15.538 net/ark: not in enabled drivers build config 00:02:15.538 net/atlantic: not in enabled drivers build config 00:02:15.538 net/avp: not in enabled drivers build config 00:02:15.538 net/axgbe: not in enabled drivers build config 00:02:15.538 net/bnx2x: not in enabled drivers build config 00:02:15.538 net/bnxt: not in enabled drivers build config 00:02:15.538 net/bonding: not in enabled drivers build config 00:02:15.538 net/cnxk: not in enabled drivers build config 00:02:15.538 net/cpfl: not in enabled drivers build config 00:02:15.538 net/cxgbe: not in enabled drivers build config 00:02:15.538 net/dpaa: not in enabled drivers build config 00:02:15.538 net/dpaa2: not in enabled drivers build config 00:02:15.538 net/e1000: not in enabled drivers build config 00:02:15.538 net/ena: not in enabled drivers build config 00:02:15.538 net/enetc: not in enabled drivers build config 00:02:15.538 net/enetfec: not in enabled drivers build config 00:02:15.538 net/enic: not in enabled drivers build config 00:02:15.538 net/failsafe: not in enabled drivers build config 00:02:15.538 net/fm10k: not in enabled drivers build config 00:02:15.538 net/gve: not in enabled drivers build config 00:02:15.538 net/hinic: not in enabled drivers build config 00:02:15.538 net/hns3: not in enabled drivers build config 00:02:15.538 net/iavf: not in enabled drivers build config 
00:02:15.538 net/ice: not in enabled drivers build config 00:02:15.538 net/idpf: not in enabled drivers build config 00:02:15.538 net/igc: not in enabled drivers build config 00:02:15.538 net/ionic: not in enabled drivers build config 00:02:15.538 net/ipn3ke: not in enabled drivers build config 00:02:15.538 net/ixgbe: not in enabled drivers build config 00:02:15.538 net/mana: not in enabled drivers build config 00:02:15.538 net/memif: not in enabled drivers build config 00:02:15.538 net/mlx4: not in enabled drivers build config 00:02:15.538 net/mlx5: not in enabled drivers build config 00:02:15.538 net/mvneta: not in enabled drivers build config 00:02:15.538 net/mvpp2: not in enabled drivers build config 00:02:15.538 net/netvsc: not in enabled drivers build config 00:02:15.538 net/nfb: not in enabled drivers build config 00:02:15.538 net/nfp: not in enabled drivers build config 00:02:15.538 net/ngbe: not in enabled drivers build config 00:02:15.538 net/null: not in enabled drivers build config 00:02:15.538 net/octeontx: not in enabled drivers build config 00:02:15.538 net/octeon_ep: not in enabled drivers build config 00:02:15.538 net/pcap: not in enabled drivers build config 00:02:15.538 net/pfe: not in enabled drivers build config 00:02:15.538 net/qede: not in enabled drivers build config 00:02:15.538 net/ring: not in enabled drivers build config 00:02:15.538 net/sfc: not in enabled drivers build config 00:02:15.538 net/softnic: not in enabled drivers build config 00:02:15.538 net/tap: not in enabled drivers build config 00:02:15.538 net/thunderx: not in enabled drivers build config 00:02:15.538 net/txgbe: not in enabled drivers build config 00:02:15.538 net/vdev_netvsc: not in enabled drivers build config 00:02:15.538 net/vhost: not in enabled drivers build config 00:02:15.538 net/virtio: not in enabled drivers build config 00:02:15.538 net/vmxnet3: not in enabled drivers build config 00:02:15.538 raw/cnxk_bphy: not in enabled drivers build config 00:02:15.538 
raw/cnxk_gpio: not in enabled drivers build config 00:02:15.538 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:15.538 raw/ifpga: not in enabled drivers build config 00:02:15.538 raw/ntb: not in enabled drivers build config 00:02:15.538 raw/skeleton: not in enabled drivers build config 00:02:15.538 crypto/armv8: not in enabled drivers build config 00:02:15.538 crypto/bcmfs: not in enabled drivers build config 00:02:15.538 crypto/caam_jr: not in enabled drivers build config 00:02:15.538 crypto/ccp: not in enabled drivers build config 00:02:15.538 crypto/cnxk: not in enabled drivers build config 00:02:15.538 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.538 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.538 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.538 crypto/mlx5: not in enabled drivers build config 00:02:15.538 crypto/mvsam: not in enabled drivers build config 00:02:15.538 crypto/nitrox: not in enabled drivers build config 00:02:15.538 crypto/null: not in enabled drivers build config 00:02:15.538 crypto/octeontx: not in enabled drivers build config 00:02:15.538 crypto/openssl: not in enabled drivers build config 00:02:15.538 crypto/scheduler: not in enabled drivers build config 00:02:15.538 crypto/uadk: not in enabled drivers build config 00:02:15.538 crypto/virtio: not in enabled drivers build config 00:02:15.538 compress/isal: not in enabled drivers build config 00:02:15.538 compress/mlx5: not in enabled drivers build config 00:02:15.538 compress/octeontx: not in enabled drivers build config 00:02:15.538 compress/zlib: not in enabled drivers build config 00:02:15.538 regex/mlx5: not in enabled drivers build config 00:02:15.538 regex/cn9k: not in enabled drivers build config 00:02:15.538 ml/cnxk: not in enabled drivers build config 00:02:15.538 vdpa/ifc: not in enabled drivers build config 00:02:15.538 vdpa/mlx5: not in enabled drivers build config 00:02:15.538 vdpa/nfp: not in enabled drivers build 
config 00:02:15.538 vdpa/sfc: not in enabled drivers build config 00:02:15.538 event/cnxk: not in enabled drivers build config 00:02:15.538 event/dlb2: not in enabled drivers build config 00:02:15.538 event/dpaa: not in enabled drivers build config 00:02:15.538 event/dpaa2: not in enabled drivers build config 00:02:15.538 event/dsw: not in enabled drivers build config 00:02:15.538 event/opdl: not in enabled drivers build config 00:02:15.538 event/skeleton: not in enabled drivers build config 00:02:15.538 event/sw: not in enabled drivers build config 00:02:15.538 event/octeontx: not in enabled drivers build config 00:02:15.538 baseband/acc: not in enabled drivers build config 00:02:15.538 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:15.538 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:15.538 baseband/la12xx: not in enabled drivers build config 00:02:15.538 baseband/null: not in enabled drivers build config 00:02:15.538 baseband/turbo_sw: not in enabled drivers build config 00:02:15.538 gpu/cuda: not in enabled drivers build config 00:02:15.538 00:02:15.538 00:02:15.538 Build targets in project: 220 00:02:15.538 00:02:15.538 DPDK 23.11.0 00:02:15.538 00:02:15.538 User defined options 00:02:15.538 libdir : lib 00:02:15.538 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.538 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:15.538 c_link_args : 00:02:15.538 enable_docs : false 00:02:15.538 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:15.538 enable_kmods : false 00:02:15.538 machine : native 00:02:15.538 tests : false 00:02:15.538 00:02:15.538 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.538 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
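The `-Denable_drivers=bus,bus/pci,...,net/i40e/base,` value passed to meson above (note the trailing comma, which meson tolerates) comes from the `printf %s,` invocation at `autobuild_common.sh@188`: `printf` repeats its format string once per argument, so `%s,` appends a comma after every element of the driver array. A minimal reproduction, using the same driver list the log shows:

```shell
#!/usr/bin/env bash
# Rebuild the -Denable_drivers value the way autobuild_common.sh@188 does:
# printf cycles its format over all arguments, yielding "a,b,c," with a
# trailing comma.
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
echo "$drivers"   # bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

The same string then appears verbatim in the "User defined options" summary meson prints at the end of configuration.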
00:02:15.538 07:36:08 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:02:15.538 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:15.538 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.538 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.538 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.538 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.538 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.538 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.538 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.538 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.538 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.538 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.538 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.538 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.538 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.796 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.796 [15/710] Linking static target lib/librte_kvargs.a 00:02:15.796 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.796 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.796 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.796 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.796 [20/710] Linking static target lib/librte_log.a 00:02:15.796 [21/710] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.368 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.629 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.629 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.629 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.629 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.629 [27/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.629 [28/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.629 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.629 [30/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.629 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.629 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.629 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.629 [34/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.629 [35/710] Linking target lib/librte_log.so.24.0 00:02:16.629 [36/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.629 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.629 [38/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.629 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.629 [40/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.629 [41/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.629 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.889 [43/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.889 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.889 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.889 [46/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.889 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.889 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.889 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.889 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.889 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.889 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.889 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.889 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.889 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:16.889 [56/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:16.889 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.889 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.889 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.889 [60/710] Linking target lib/librte_kvargs.so.24.0 00:02:16.889 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:17.151 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:17.151 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:17.151 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:17.151 [65/710] Generating symbol file 
lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:17.409 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:17.409 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:17.409 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:17.409 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:17.409 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:17.409 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.409 [72/710] Linking static target lib/librte_pci.a 00:02:17.409 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.672 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.672 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:17.672 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.672 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.672 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.672 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.934 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.934 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:17.934 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.934 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.934 [84/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.934 [85/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.934 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.934 [87/710] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.934 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.934 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.934 [90/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.934 [91/710] Linking static target lib/librte_ring.a 00:02:17.934 [92/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.934 [93/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.934 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.934 [95/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.934 [96/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.934 [97/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.194 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.194 [99/710] Linking static target lib/librte_meter.a 00:02:18.194 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.194 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.194 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.194 [103/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.194 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.194 [105/710] Linking static target lib/librte_telemetry.a 00:02:18.194 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.194 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.194 [108/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.194 [109/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.194 [110/710] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.194 [111/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.464 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.464 [113/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.464 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.464 [115/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.464 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.464 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.464 [118/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:18.464 [119/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.464 [120/710] Linking static target lib/librte_net.a 00:02:18.725 [121/710] Linking static target lib/librte_eal.a 00:02:18.725 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.725 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.725 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.725 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.725 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.725 [127/710] Linking static target lib/librte_cmdline.a 00:02:18.725 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.725 [129/710] Linking static target lib/librte_mempool.a 00:02:18.987 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.987 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.987 [132/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:18.987 [133/710] Linking static target 
lib/librte_cfgfile.a 00:02:18.987 [134/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.987 [135/710] Linking target lib/librte_telemetry.so.24.0 00:02:18.987 [136/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.987 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:18.987 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:18.987 [139/710] Linking static target lib/librte_metrics.a 00:02:19.249 [140/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:19.249 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.249 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.249 [143/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:19.249 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:19.249 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:19.513 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:19.513 [147/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.513 [148/710] Linking static target lib/librte_rcu.a 00:02:19.513 [149/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:19.513 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:19.513 [151/710] Linking static target lib/librte_bitratestats.a 00:02:19.513 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:19.778 [153/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.778 [154/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.778 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:19.778 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.778 [157/710] Linking static target 
lib/librte_timer.a 00:02:19.778 [158/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.778 [159/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:19.778 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.778 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:19.778 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.778 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.038 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:20.038 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.038 [166/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.038 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:20.038 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:20.038 [169/710] Linking static target lib/librte_bbdev.a 00:02:20.038 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.300 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.300 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.300 [173/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.300 [174/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:20.300 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.300 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.300 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.300 [178/710] 
Linking static target lib/librte_compressdev.a 00:02:20.561 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:20.561 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:20.561 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:20.820 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:20.820 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.820 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:20.820 [185/710] Linking static target lib/librte_distributor.a 00:02:20.820 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:21.080 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.080 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:21.080 [189/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.080 [190/710] Linking static target lib/librte_bpf.a 00:02:21.080 [191/710] Linking static target lib/librte_dmadev.a 00:02:21.080 [192/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.080 [193/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:21.350 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:21.350 [195/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.350 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:21.350 [197/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:21.350 [198/710] Linking static target lib/librte_dispatcher.a 00:02:21.350 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:21.350 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:21.350 
[201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:21.350 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:21.350 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:21.350 [204/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.615 [205/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:21.615 [206/710] Linking static target lib/librte_gpudev.a 00:02:21.615 [207/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.615 [208/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:21.615 [209/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:21.615 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:21.615 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.615 [212/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.615 [213/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:21.615 [214/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:21.615 [215/710] Linking static target lib/librte_gro.a 00:02:21.615 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.874 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:21.874 [218/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:21.874 [219/710] Linking static target lib/librte_jobstats.a 00:02:21.874 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:21.874 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:22.138 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.138 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:22.138 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:22.138 [225/710] Linking static target lib/librte_latencystats.a 00:02:22.138 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:22.138 [227/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:22.138 [228/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:22.138 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:22.138 [230/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:22.400 [231/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.400 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:22.400 [233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:22.400 [234/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:22.400 [235/710] Linking static target lib/librte_ip_frag.a 00:02:22.400 [236/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:22.661 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:22.661 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.661 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.661 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.925 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:22.925 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:22.925 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.925 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:22.925 [245/710] Generating lib/ip_frag.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:22.925 [246/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.925 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:22.925 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:22.925 [249/710] Linking static target lib/librte_gso.a 00:02:23.190 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:23.190 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:23.190 [252/710] Linking static target lib/librte_regexdev.a 00:02:23.190 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:23.190 [254/710] Linking static target lib/librte_rawdev.a 00:02:23.190 [255/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:23.190 [256/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:23.452 [257/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:23.452 [258/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:23.452 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.452 [260/710] Linking static target lib/librte_efd.a 00:02:23.452 [261/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:23.452 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:23.452 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:23.452 [264/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:23.452 [265/710] Linking static target lib/librte_mldev.a 00:02:23.452 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:23.452 [267/710] Linking static target lib/librte_pcapng.a 00:02:23.715 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:23.715 [269/710] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:23.715 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:23.715 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:02:23.715 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:23.715 [273/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:23.715 [274/710] Linking static target lib/librte_stack.a 00:02:23.715 [275/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.715 [276/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:23.715 [277/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:23.715 [278/710] Linking static target lib/librte_lpm.a 00:02:23.979 [279/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:23.979 [280/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:23.979 [281/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:23.979 [282/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.979 [283/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:23.979 [284/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.979 [285/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:23.979 [286/710] Linking static target lib/librte_hash.a 00:02:23.979 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.244 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:24.244 [289/710] Linking static target lib/acl/libavx512_tmp.a 00:02:24.244 [290/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.244 [291/710] Linking static target lib/librte_acl.a 00:02:24.244 [292/710] Linking static target lib/librte_power.a 
00:02:24.244 [293/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.244 [294/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.244 [295/710] Linking static target lib/librte_reorder.a 00:02:24.244 [296/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.244 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.244 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.244 [299/710] Linking static target lib/librte_security.a 00:02:24.244 [300/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.513 [301/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.513 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:24.513 [303/710] Linking static target lib/librte_mbuf.a 00:02:24.513 [304/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:24.513 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:24.775 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.775 [307/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:24.775 [308/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.775 [309/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:24.775 [310/710] Linking static target lib/librte_rib.a 00:02:24.775 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:24.775 [312/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.775 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:24.775 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:25.044 [315/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.044 [316/710] 
Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:25.044 [317/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:25.044 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.044 [319/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:25.044 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:25.044 [321/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:25.044 [322/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:25.044 [323/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:25.304 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:25.304 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:25.304 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.304 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.304 [328/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:25.567 [329/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.567 [330/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.567 [331/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.567 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:25.827 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:25.827 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:25.827 [335/710] Linking static target lib/librte_member.a 00:02:25.827 [336/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.087 [337/710] Linking static target lib/librte_eventdev.a 00:02:26.087 [338/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:26.087 [339/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:26.087 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:26.087 [341/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.347 [342/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.347 [343/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:26.347 [344/710] Linking static target lib/librte_cryptodev.a 00:02:26.347 [345/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.347 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:26.347 [347/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:26.347 [348/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:26.347 [349/710] Linking static target lib/librte_ethdev.a 00:02:26.347 [350/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:26.347 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:26.347 [352/710] Linking static target lib/librte_sched.a 00:02:26.347 [353/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:26.347 [354/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:26.347 [355/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:26.347 [356/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:26.609 [357/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:26.609 [358/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:26.609 [359/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:26.609 [360/710] Linking static target lib/librte_fib.a 00:02:26.609 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:26.609 
[362/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:26.870 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:26.870 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:26.870 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:26.870 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:26.870 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.870 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:27.137 [369/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.137 [370/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:27.137 [371/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:27.137 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:27.137 [373/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.399 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:27.399 [375/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:27.399 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:27.399 [377/710] Linking static target lib/librte_pdump.a 00:02:27.662 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.662 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:27.662 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:27.662 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:27.662 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:27.662 [383/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:27.662 [384/710] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:27.662 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:27.662 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.662 [387/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:27.925 [388/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:27.925 [389/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.925 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:27.925 [391/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:27.925 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:27.925 [393/710] Linking static target lib/librte_ipsec.a 00:02:27.925 [394/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:28.186 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:28.186 [396/710] Linking static target lib/librte_table.a 00:02:28.186 [397/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:28.449 [398/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.449 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:28.449 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:28.449 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.709 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:28.709 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:28.709 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:28.972 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:28.972 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 
00:02:28.972 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:28.972 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:28.972 [409/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:28.972 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:29.235 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:29.235 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.235 [413/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:29.235 [414/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:29.235 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.499 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:29.499 [417/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.499 [418/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:29.499 [419/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.499 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:29.499 [421/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.499 [422/710] Linking static target drivers/librte_bus_vdev.a 00:02:29.499 [423/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:29.499 [424/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.763 [425/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:29.763 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:29.763 [427/710] Linking static target lib/librte_port.a 00:02:30.025 [428/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:30.025 [429/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:30.025 [430/710] Linking static target lib/librte_graph.a 00:02:30.025 [431/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.025 [432/710] Linking static target drivers/librte_bus_pci.a 00:02:30.025 [433/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.025 [434/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.025 [435/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.025 [436/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:30.025 [437/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:30.025 [438/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:30.025 [439/710] Linking target lib/librte_eal.so.24.0 00:02:30.286 [440/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:30.286 [441/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:30.286 [442/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:30.553 [443/710] Linking target lib/librte_ring.so.24.0 00:02:30.553 [444/710] Linking target lib/librte_meter.so.24.0 00:02:30.553 [445/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:30.553 [446/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:30.553 [447/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:30.553 [448/710] Linking target lib/librte_pci.so.24.0 00:02:30.553 [449/710] Linking target lib/librte_timer.so.24.0 00:02:30.553 [450/710] Linking target lib/librte_acl.so.24.0 00:02:30.818 [451/710] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:30.818 [452/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:30.818 [453/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:30.818 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.818 [455/710] Linking target lib/librte_rcu.so.24.0 00:02:30.818 [456/710] Linking target lib/librte_mempool.so.24.0 00:02:30.818 [457/710] Linking target lib/librte_cfgfile.so.24.0 00:02:30.818 [458/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:30.818 [459/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.818 [460/710] Linking target lib/librte_dmadev.so.24.0 00:02:30.818 [461/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:30.818 [462/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:30.818 [463/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:30.818 [464/710] Linking target lib/librte_jobstats.so.24.0 00:02:30.818 [465/710] Linking target lib/librte_stack.so.24.0 00:02:30.818 [466/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.818 [467/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:30.818 [468/710] Linking target lib/librte_rawdev.so.24.0 00:02:30.818 [469/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:30.818 [470/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:30.818 [471/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:31.081 [472/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.081 [473/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:31.081 [474/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:31.081 
[475/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:31.081 [476/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:31.081 [477/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:31.081 [478/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:31.081 [479/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:31.081 [480/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:31.081 [481/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:31.081 [482/710] Linking target lib/librte_mbuf.so.24.0 00:02:31.081 [483/710] Linking target lib/librte_rib.so.24.0 00:02:31.081 [484/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:31.081 [485/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:31.081 [486/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:31.081 [487/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:31.081 [488/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:31.348 [489/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:31.348 [490/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:31.348 [491/710] Linking static target drivers/librte_mempool_ring.a 00:02:31.348 [492/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:31.348 [493/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:31.348 [494/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:31.348 [495/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:31.348 [496/710] Generating symbol file 
lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:31.348 [497/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:31.348 [498/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:31.348 [499/710] Linking target lib/librte_net.so.24.0 00:02:31.348 [500/710] Linking target lib/librte_compressdev.so.24.0 00:02:31.348 [501/710] Linking target lib/librte_bbdev.so.24.0 00:02:31.611 [502/710] Linking target lib/librte_distributor.so.24.0 00:02:31.611 [503/710] Linking target lib/librte_cryptodev.so.24.0 00:02:31.611 [504/710] Linking target lib/librte_gpudev.so.24.0 00:02:31.611 [505/710] Linking target lib/librte_regexdev.so.24.0 00:02:31.611 [506/710] Linking target lib/librte_reorder.so.24.0 00:02:31.611 [507/710] Linking target lib/librte_sched.so.24.0 00:02:31.611 [508/710] Linking target lib/librte_mldev.so.24.0 00:02:31.611 [509/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:31.611 [510/710] Linking target lib/librte_fib.so.24.0 00:02:31.611 [511/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:31.611 [512/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:31.874 [513/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:31.874 [514/710] Linking target lib/librte_cmdline.so.24.0 00:02:31.874 [515/710] Linking target lib/librte_hash.so.24.0 00:02:31.874 [516/710] Linking target lib/librte_security.so.24.0 00:02:31.874 [517/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:31.874 [518/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:32.136 [519/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:32.136 [520/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:32.136 [521/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:32.136 [522/710] 
Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:32.136 [523/710] Linking target lib/librte_efd.so.24.0 00:02:32.136 [524/710] Linking target lib/librte_lpm.so.24.0 00:02:32.136 [525/710] Linking target lib/librte_member.so.24.0 00:02:32.136 [526/710] Linking target lib/librte_ipsec.so.24.0 00:02:32.136 [527/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:32.400 [528/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:32.400 [529/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:32.400 [530/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:32.400 [531/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:32.400 [532/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:32.400 [533/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:32.400 [534/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:32.664 [535/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:32.664 [536/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:32.664 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:32.664 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:32.664 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:32.664 [540/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:32.664 [541/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:32.664 [542/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:32.929 [543/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:33.189 [544/710] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:33.189 [545/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:33.189 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:33.189 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:33.189 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:33.190 [549/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:33.190 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:33.450 [551/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:33.450 [552/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:33.711 [553/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:33.711 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:33.711 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:33.711 [556/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:33.711 [557/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:33.976 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:33.976 [559/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:34.237 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:34.497 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:34.497 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:34.497 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:34.497 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:34.759 [565/710] Compiling 
C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:34.759 [566/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:34.759 [567/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.759 [568/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:34.759 [569/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:34.759 [570/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:34.759 [571/710] Linking target lib/librte_ethdev.so.24.0 00:02:34.759 [572/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:35.022 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:35.022 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:35.022 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:35.022 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:35.022 [577/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:35.022 [578/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:35.022 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:35.283 [580/710] Linking target lib/librte_metrics.so.24.0 00:02:35.283 [581/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:35.283 [582/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:35.283 [583/710] Linking target lib/librte_bpf.so.24.0 00:02:35.283 [584/710] Linking target lib/librte_eventdev.so.24.0 00:02:35.283 [585/710] Linking target lib/librte_gro.so.24.0 00:02:35.283 [586/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:35.283 [587/710] Linking target lib/librte_gso.so.24.0 00:02:35.546 
[588/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:35.546 [589/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:35.546 [590/710] Linking target lib/librte_ip_frag.so.24.0 00:02:35.546 [591/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:35.546 [592/710] Linking target lib/librte_pcapng.so.24.0 00:02:35.546 [593/710] Linking target lib/librte_power.so.24.0 00:02:35.546 [594/710] Linking static target lib/librte_pdcp.a 00:02:35.546 [595/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:35.546 [596/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:35.546 [597/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:35.546 [598/710] Linking target lib/librte_latencystats.so.24.0 00:02:35.546 [599/710] Linking target lib/librte_bitratestats.so.24.0 00:02:35.546 [600/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:35.546 [601/710] Linking target lib/librte_dispatcher.so.24.0 00:02:35.546 [602/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:35.546 [603/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:35.806 [604/710] Linking target lib/librte_pdump.so.24.0 00:02:35.806 [605/710] Linking target lib/librte_port.so.24.0 00:02:35.806 [606/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:35.806 [607/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:35.806 [608/710] Linking target lib/librte_graph.so.24.0 00:02:35.806 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:35.806 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:36.069 [611/710] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:36.069 [612/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.069 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:36.069 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:36.069 [615/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:36.069 [616/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:36.069 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:36.069 [618/710] Linking target lib/librte_pdcp.so.24.0 00:02:36.069 [619/710] Linking target lib/librte_table.so.24.0 00:02:36.332 [620/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:36.332 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:36.332 [622/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:36.332 [623/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:36.332 [624/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:36.332 [625/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:36.332 [626/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:36.592 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:36.592 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:36.592 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:36.850 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:36.850 [631/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:37.110 [632/710] Compiling C 
object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:37.110 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:37.110 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:37.110 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:37.110 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:37.368 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:37.368 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:37.368 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:37.368 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:37.368 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:37.368 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:37.627 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:37.627 [644/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:37.627 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:37.627 [646/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:37.897 [647/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:37.897 [648/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:37.897 [649/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:37.897 [650/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:38.177 [651/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:38.177 [652/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:38.177 [653/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:38.177 [654/710] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:38.465 [655/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:38.465 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:38.465 [657/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:38.465 [658/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:38.465 [659/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:38.465 [660/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:38.762 [661/710] Linking static target drivers/librte_net_i40e.a 00:02:38.762 [662/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:38.762 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:38.762 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:39.329 [665/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.329 [666/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:39.329 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:39.329 [668/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:39.329 [669/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:39.587 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:39.845 [671/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:40.103 [672/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:40.103 [673/710] Linking static target lib/librte_node.a 00:02:40.361 [674/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.361 [675/710] Linking target 
lib/librte_node.so.24.0 00:02:40.361 [676/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:41.294 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:41.552 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:41.810 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:43.185 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:43.752 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:50.311 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:22.412 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:22.412 [684/710] Linking static target lib/librte_vhost.a 00:03:22.412 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.412 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:30.533 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:30.533 [688/710] Linking static target lib/librte_pipeline.a 00:03:30.791 [689/710] Linking target app/dpdk-dumpcap 00:03:30.791 [690/710] Linking target app/dpdk-proc-info 00:03:30.791 [691/710] Linking target app/dpdk-pdump 00:03:30.791 [692/710] Linking target app/dpdk-test-cmdline 00:03:30.791 [693/710] Linking target app/dpdk-test-gpudev 00:03:30.791 [694/710] Linking target app/dpdk-test-regex 00:03:30.791 [695/710] Linking target app/dpdk-test-dma-perf 00:03:30.791 [696/710] Linking target app/dpdk-test-fib 00:03:30.791 [697/710] Linking target app/dpdk-test-flow-perf 00:03:30.791 [698/710] Linking target app/dpdk-graph 00:03:30.791 [699/710] Linking target app/dpdk-test-acl 00:03:30.791 [700/710] Linking target app/dpdk-test-bbdev 00:03:30.791 [701/710] Linking target app/dpdk-test-pipeline 00:03:30.791 [702/710] Linking target app/dpdk-test-sad 00:03:30.791 [703/710] Linking target 
app/dpdk-test-security-perf 00:03:30.791 [704/710] Linking target app/dpdk-test-mldev 00:03:30.791 [705/710] Linking target app/dpdk-test-crypto-perf 00:03:30.791 [706/710] Linking target app/dpdk-test-eventdev 00:03:30.791 [707/710] Linking target app/dpdk-test-compress-perf 00:03:31.048 [708/710] Linking target app/dpdk-testpmd 00:03:32.953 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.211 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:33.211 07:37:26 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:33.211 07:37:26 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:33.211 07:37:26 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:33.211 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:33.211 [0/1] Installing files. 
00:03:33.472 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:33.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.473 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:03:33.473 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:33.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.476 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.738 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:33.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:33.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:33.739 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:33.739 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:34.311 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:34.311 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:34.311 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:34.311 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:34.311 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.311 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:34.312 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.312 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:34.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:34.315 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:34.315 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:34.315 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:34.315 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:34.315 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:34.315 
Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:34.315 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:34.315 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:34.315 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:34.315 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:34.315 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:34.315 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:34.315 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:34.315 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:34.315 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:34.315 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:34.315 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:34.315 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:34.315 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 
00:03:34.315 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:34.315 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:34.315 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:34.315 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:34.315 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:34.315 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:34.315 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:34.315 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:34.316 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:34.316 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:34.316 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:34.316 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:34.316 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:34.316 Installing symlink pointing to librte_acl.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:34.316 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:34.316 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:34.316 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:34.316 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:34.316 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:34.316 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:34.316 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:34.316 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:34.316 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:34.316 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:34.316 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:34.316 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:34.316 Installing symlink pointing to librte_cryptodev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:34.316 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:34.316 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:34.316 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:34.316 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:34.316 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:34.316 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:34.316 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:34.316 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:34.316 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:34.316 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:34.316 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:34.316 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:34.316 Installing symlink pointing to librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:34.316 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:34.316 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:34.316 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:34.316 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:34.316 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:34.316 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:34.316 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:34.316 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:34.316 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:34.316 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:34.316 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:34.316 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:34.316 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:34.316 
Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:34.316 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:34.316 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:34.316 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:34.316 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:34.316 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:34.316 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:34.316 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:34.316 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:34.316 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:34.316 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:34.316 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:34.316 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:34.316 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:34.316 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:34.316 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:34.316 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:34.316 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:34.316 
'./librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:34.316 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:34.316 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:34.316 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:34.316 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:34.316 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:34.316 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:34.316 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:34.316 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:34.316 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:34.316 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:34.316 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:34.316 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:34.316 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:34.316 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:34.316 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:34.316 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:34.316 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:34.316 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:34.316 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:34.316 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:34.316 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:34.316 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:34.316 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:34.316 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:34.316 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:34.316 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:34.316 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:34.316 Installing symlink pointing to 
librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:34.316 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:34.316 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:34.316 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:34.316 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:34.317 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:34.317 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:34.317 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:34.317 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:34.317 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:34.317 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:34.317 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:34.317 Installing symlink pointing to librte_net_i40e.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:34.317 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:34.317 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:34.317 07:37:27 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:34.576 07:37:27 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:34.576 00:03:34.576 real 1m25.505s 00:03:34.576 user 18m2.846s 00:03:34.576 sys 2m9.080s 00:03:34.576 07:37:27 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:34.576 07:37:27 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:34.576 ************************************ 00:03:34.576 END TEST build_native_dpdk 00:03:34.576 ************************************ 00:03:34.576 07:37:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:34.576 07:37:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:34.576 07:37:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:34.576 07:37:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:34.576 07:37:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:34.576 07:37:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:34.576 07:37:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:34.576 07:37:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:34.576 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for 
additional libs... 00:03:34.576 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:34.576 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:34.576 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:34.836 Using 'verbs' RDMA provider 00:03:45.773 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:55.764 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:55.764 Creating mk/config.mk...done. 00:03:55.764 Creating mk/cc.flags.mk...done. 00:03:55.764 Type 'make' to build. 00:03:55.764 07:37:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:55.764 07:37:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:55.764 07:37:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:55.764 07:37:47 -- common/autotest_common.sh@10 -- $ set +x 00:03:55.764 ************************************ 00:03:55.764 START TEST make 00:03:55.764 ************************************ 00:03:55.764 07:37:47 make -- common/autotest_common.sh@1129 -- $ make -j48 00:03:55.764 make[1]: Nothing to be done for 'all'. 
00:03:56.706 The Meson build system 00:03:56.706 Version: 1.5.0 00:03:56.706 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:56.706 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:56.706 Build type: native build 00:03:56.706 Project name: libvfio-user 00:03:56.706 Project version: 0.0.1 00:03:56.706 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:56.706 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:56.706 Host machine cpu family: x86_64 00:03:56.706 Host machine cpu: x86_64 00:03:56.706 Run-time dependency threads found: YES 00:03:56.706 Library dl found: YES 00:03:56.706 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:56.707 Run-time dependency json-c found: YES 0.17 00:03:56.707 Run-time dependency cmocka found: YES 1.1.7 00:03:56.707 Program pytest-3 found: NO 00:03:56.707 Program flake8 found: NO 00:03:56.707 Program misspell-fixer found: NO 00:03:56.707 Program restructuredtext-lint found: NO 00:03:56.707 Program valgrind found: YES (/usr/bin/valgrind) 00:03:56.707 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:56.707 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:56.707 Compiler for C supports arguments -Wwrite-strings: YES 00:03:56.707 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:56.707 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:56.707 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:56.707 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:56.707 Build targets in project: 8 00:03:56.707 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:56.707 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:56.707 00:03:56.707 libvfio-user 0.0.1 00:03:56.707 00:03:56.707 User defined options 00:03:56.707 buildtype : debug 00:03:56.707 default_library: shared 00:03:56.707 libdir : /usr/local/lib 00:03:56.707 00:03:56.707 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:57.659 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:57.659 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:57.659 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:57.659 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:57.922 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:57.922 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:57.922 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:57.922 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:57.922 [8/37] Compiling C object samples/null.p/null.c.o 00:03:57.922 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:57.922 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:57.922 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:57.922 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:57.922 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:57.922 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:57.922 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:57.922 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:57.922 [17/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:57.922 [18/37] Compiling C object 
test/unit_tests.p/.._lib_migration.c.o 00:03:57.922 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:57.922 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:57.922 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:57.922 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:57.922 [23/37] Compiling C object samples/server.p/server.c.o 00:03:57.922 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:57.922 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:57.922 [26/37] Compiling C object samples/client.p/client.c.o 00:03:57.922 [27/37] Linking target samples/client 00:03:58.187 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:58.187 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:58.187 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:58.187 [31/37] Linking target test/unit_tests 00:03:58.447 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:58.447 [33/37] Linking target samples/server 00:03:58.447 [34/37] Linking target samples/null 00:03:58.447 [35/37] Linking target samples/gpio-pci-idio-16 00:03:58.447 [36/37] Linking target samples/lspci 00:03:58.447 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:58.447 INFO: autodetecting backend as ninja 00:03:58.447 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:58.447 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:59.388 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:59.388 ninja: no work to do. 
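The configure/compile/install sequence above (Meson setup, ninja over the 37 targets, then a staged `meson install`) can be reproduced by hand roughly as follows. This is a sketch using the paths and user-defined options this log reports (buildtype debug, shared default_library, libdir /usr/local/lib); adjust `SPDK` to your own checkout:

```shell
# Out-of-tree debug build of the bundled libvfio-user, mirroring the log above.
# SPDK points at the source checkout; the paths are the ones this CI run used.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
  --buildtype=debug --default-library=shared --libdir=/usr/local/lib
ninja -C "$SPDK/build/libvfio-user/build-debug"
# Stage the install under DESTDIR instead of writing to /usr/local directly,
# exactly as the "DESTDIR=... meson install --quiet" line in the log does.
DESTDIR="$SPDK/build/libvfio-user" meson install --quiet \
  -C "$SPDK/build/libvfio-user/build-debug"
```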
00:04:38.095 CC lib/log/log.o 00:04:38.095 CC lib/log/log_flags.o 00:04:38.095 CC lib/log/log_deprecated.o 00:04:38.095 CC lib/ut/ut.o 00:04:38.095 CC lib/ut_mock/mock.o 00:04:38.095 LIB libspdk_ut.a 00:04:38.095 LIB libspdk_log.a 00:04:38.095 LIB libspdk_ut_mock.a 00:04:38.095 SO libspdk_ut.so.2.0 00:04:38.095 SO libspdk_ut_mock.so.6.0 00:04:38.095 SO libspdk_log.so.7.1 00:04:38.096 SYMLINK libspdk_ut.so 00:04:38.096 SYMLINK libspdk_ut_mock.so 00:04:38.096 SYMLINK libspdk_log.so 00:04:38.096 CXX lib/trace_parser/trace.o 00:04:38.096 CC lib/dma/dma.o 00:04:38.096 CC lib/ioat/ioat.o 00:04:38.096 CC lib/util/base64.o 00:04:38.096 CC lib/util/bit_array.o 00:04:38.096 CC lib/util/cpuset.o 00:04:38.096 CC lib/util/crc16.o 00:04:38.096 CC lib/util/crc32.o 00:04:38.096 CC lib/util/crc32c.o 00:04:38.096 CC lib/util/crc32_ieee.o 00:04:38.096 CC lib/util/crc64.o 00:04:38.096 CC lib/util/dif.o 00:04:38.096 CC lib/util/fd.o 00:04:38.096 CC lib/util/fd_group.o 00:04:38.096 CC lib/util/file.o 00:04:38.096 CC lib/util/hexlify.o 00:04:38.096 CC lib/util/iov.o 00:04:38.096 CC lib/util/math.o 00:04:38.096 CC lib/util/net.o 00:04:38.096 CC lib/util/pipe.o 00:04:38.096 CC lib/util/strerror_tls.o 00:04:38.096 CC lib/util/string.o 00:04:38.096 CC lib/util/uuid.o 00:04:38.096 CC lib/util/zipf.o 00:04:38.096 CC lib/util/xor.o 00:04:38.096 CC lib/util/md5.o 00:04:38.096 CC lib/vfio_user/host/vfio_user_pci.o 00:04:38.096 CC lib/vfio_user/host/vfio_user.o 00:04:38.096 LIB libspdk_dma.a 00:04:38.096 SO libspdk_dma.so.5.0 00:04:38.096 SYMLINK libspdk_dma.so 00:04:38.096 LIB libspdk_ioat.a 00:04:38.096 SO libspdk_ioat.so.7.0 00:04:38.096 SYMLINK libspdk_ioat.so 00:04:38.096 LIB libspdk_vfio_user.a 00:04:38.096 SO libspdk_vfio_user.so.5.0 00:04:38.096 SYMLINK libspdk_vfio_user.so 00:04:38.096 LIB libspdk_util.a 00:04:38.096 SO libspdk_util.so.10.1 00:04:38.096 SYMLINK libspdk_util.so 00:04:38.096 CC lib/conf/conf.o 00:04:38.096 CC lib/vmd/vmd.o 00:04:38.096 CC lib/rdma_utils/rdma_utils.o 
00:04:38.096 CC lib/idxd/idxd.o 00:04:38.096 CC lib/json/json_parse.o 00:04:38.096 CC lib/vmd/led.o 00:04:38.096 CC lib/env_dpdk/env.o 00:04:38.096 CC lib/idxd/idxd_user.o 00:04:38.096 CC lib/json/json_util.o 00:04:38.096 CC lib/env_dpdk/memory.o 00:04:38.096 CC lib/idxd/idxd_kernel.o 00:04:38.096 CC lib/json/json_write.o 00:04:38.096 CC lib/env_dpdk/pci.o 00:04:38.096 CC lib/env_dpdk/init.o 00:04:38.096 CC lib/env_dpdk/threads.o 00:04:38.096 CC lib/env_dpdk/pci_ioat.o 00:04:38.096 CC lib/env_dpdk/pci_virtio.o 00:04:38.096 CC lib/env_dpdk/pci_vmd.o 00:04:38.096 CC lib/env_dpdk/pci_idxd.o 00:04:38.096 CC lib/env_dpdk/pci_event.o 00:04:38.096 CC lib/env_dpdk/sigbus_handler.o 00:04:38.096 CC lib/env_dpdk/pci_dpdk.o 00:04:38.096 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:38.096 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:38.096 LIB libspdk_trace_parser.a 00:04:38.096 SO libspdk_trace_parser.so.6.0 00:04:38.096 LIB libspdk_conf.a 00:04:38.096 SYMLINK libspdk_trace_parser.so 00:04:38.096 SO libspdk_conf.so.6.0 00:04:38.096 LIB libspdk_rdma_utils.a 00:04:38.096 SYMLINK libspdk_conf.so 00:04:38.096 SO libspdk_rdma_utils.so.1.0 00:04:38.096 LIB libspdk_json.a 00:04:38.096 SO libspdk_json.so.6.0 00:04:38.096 SYMLINK libspdk_rdma_utils.so 00:04:38.096 SYMLINK libspdk_json.so 00:04:38.096 CC lib/rdma_provider/common.o 00:04:38.096 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:38.096 CC lib/jsonrpc/jsonrpc_server.o 00:04:38.096 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:38.096 CC lib/jsonrpc/jsonrpc_client.o 00:04:38.096 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:38.096 LIB libspdk_vmd.a 00:04:38.096 LIB libspdk_idxd.a 00:04:38.096 SO libspdk_vmd.so.6.0 00:04:38.096 SO libspdk_idxd.so.12.1 00:04:38.096 SYMLINK libspdk_vmd.so 00:04:38.096 LIB libspdk_rdma_provider.a 00:04:38.096 SYMLINK libspdk_idxd.so 00:04:38.096 SO libspdk_rdma_provider.so.7.0 00:04:38.096 LIB libspdk_jsonrpc.a 00:04:38.096 SYMLINK libspdk_rdma_provider.so 00:04:38.096 SO libspdk_jsonrpc.so.6.0 00:04:38.096 SYMLINK 
libspdk_jsonrpc.so 00:04:38.096 CC lib/rpc/rpc.o 00:04:38.096 LIB libspdk_rpc.a 00:04:38.096 SO libspdk_rpc.so.6.0 00:04:38.096 SYMLINK libspdk_rpc.so 00:04:38.096 CC lib/trace/trace.o 00:04:38.096 CC lib/trace/trace_flags.o 00:04:38.096 CC lib/trace/trace_rpc.o 00:04:38.096 CC lib/notify/notify.o 00:04:38.096 CC lib/keyring/keyring.o 00:04:38.096 CC lib/notify/notify_rpc.o 00:04:38.096 CC lib/keyring/keyring_rpc.o 00:04:38.096 LIB libspdk_notify.a 00:04:38.096 SO libspdk_notify.so.6.0 00:04:38.096 LIB libspdk_keyring.a 00:04:38.096 SYMLINK libspdk_notify.so 00:04:38.096 LIB libspdk_trace.a 00:04:38.096 SO libspdk_keyring.so.2.0 00:04:38.096 SO libspdk_trace.so.11.0 00:04:38.096 SYMLINK libspdk_keyring.so 00:04:38.096 SYMLINK libspdk_trace.so 00:04:38.096 CC lib/thread/thread.o 00:04:38.096 CC lib/thread/iobuf.o 00:04:38.096 CC lib/sock/sock.o 00:04:38.096 CC lib/sock/sock_rpc.o 00:04:38.096 LIB libspdk_env_dpdk.a 00:04:38.096 SO libspdk_env_dpdk.so.15.1 00:04:38.354 SYMLINK libspdk_env_dpdk.so 00:04:38.613 LIB libspdk_sock.a 00:04:38.613 SO libspdk_sock.so.10.0 00:04:38.613 SYMLINK libspdk_sock.so 00:04:38.871 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:38.871 CC lib/nvme/nvme_ctrlr.o 00:04:38.871 CC lib/nvme/nvme_fabric.o 00:04:38.871 CC lib/nvme/nvme_ns_cmd.o 00:04:38.871 CC lib/nvme/nvme_ns.o 00:04:38.871 CC lib/nvme/nvme_pcie_common.o 00:04:38.871 CC lib/nvme/nvme_pcie.o 00:04:38.871 CC lib/nvme/nvme_qpair.o 00:04:38.871 CC lib/nvme/nvme.o 00:04:38.871 CC lib/nvme/nvme_quirks.o 00:04:38.871 CC lib/nvme/nvme_transport.o 00:04:38.871 CC lib/nvme/nvme_discovery.o 00:04:38.871 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:38.871 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:38.871 CC lib/nvme/nvme_tcp.o 00:04:38.871 CC lib/nvme/nvme_opal.o 00:04:38.871 CC lib/nvme/nvme_io_msg.o 00:04:38.871 CC lib/nvme/nvme_poll_group.o 00:04:38.871 CC lib/nvme/nvme_zns.o 00:04:38.871 CC lib/nvme/nvme_stubs.o 00:04:38.871 CC lib/nvme/nvme_auth.o 00:04:38.871 CC lib/nvme/nvme_cuse.o 00:04:38.871 CC 
lib/nvme/nvme_rdma.o 00:04:38.871 CC lib/nvme/nvme_vfio_user.o 00:04:39.803 LIB libspdk_thread.a 00:04:39.803 SO libspdk_thread.so.11.0 00:04:39.803 SYMLINK libspdk_thread.so 00:04:40.062 CC lib/vfu_tgt/tgt_endpoint.o 00:04:40.062 CC lib/blob/blobstore.o 00:04:40.062 CC lib/accel/accel.o 00:04:40.062 CC lib/fsdev/fsdev.o 00:04:40.062 CC lib/blob/request.o 00:04:40.062 CC lib/virtio/virtio.o 00:04:40.062 CC lib/vfu_tgt/tgt_rpc.o 00:04:40.062 CC lib/init/json_config.o 00:04:40.062 CC lib/accel/accel_rpc.o 00:04:40.062 CC lib/fsdev/fsdev_io.o 00:04:40.062 CC lib/accel/accel_sw.o 00:04:40.062 CC lib/blob/zeroes.o 00:04:40.062 CC lib/fsdev/fsdev_rpc.o 00:04:40.062 CC lib/init/subsystem.o 00:04:40.062 CC lib/virtio/virtio_vhost_user.o 00:04:40.062 CC lib/virtio/virtio_vfio_user.o 00:04:40.062 CC lib/init/subsystem_rpc.o 00:04:40.062 CC lib/virtio/virtio_pci.o 00:04:40.062 CC lib/blob/blob_bs_dev.o 00:04:40.062 CC lib/init/rpc.o 00:04:40.320 LIB libspdk_vfu_tgt.a 00:04:40.320 LIB libspdk_init.a 00:04:40.320 LIB libspdk_virtio.a 00:04:40.320 SO libspdk_vfu_tgt.so.3.0 00:04:40.320 SO libspdk_init.so.6.0 00:04:40.320 SO libspdk_virtio.so.7.0 00:04:40.577 SYMLINK libspdk_vfu_tgt.so 00:04:40.577 SYMLINK libspdk_init.so 00:04:40.577 SYMLINK libspdk_virtio.so 00:04:40.577 CC lib/event/app.o 00:04:40.577 CC lib/event/reactor.o 00:04:40.577 CC lib/event/log_rpc.o 00:04:40.577 CC lib/event/app_rpc.o 00:04:40.577 CC lib/event/scheduler_static.o 00:04:40.835 LIB libspdk_fsdev.a 00:04:40.835 SO libspdk_fsdev.so.2.0 00:04:40.835 SYMLINK libspdk_fsdev.so 00:04:40.835 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:41.092 LIB libspdk_event.a 00:04:41.092 SO libspdk_event.so.14.0 00:04:41.092 LIB libspdk_accel.a 00:04:41.092 SYMLINK libspdk_event.so 00:04:41.092 SO libspdk_accel.so.16.0 00:04:41.350 SYMLINK libspdk_accel.so 00:04:41.350 LIB libspdk_nvme.a 00:04:41.350 CC lib/bdev/bdev.o 00:04:41.350 CC lib/bdev/bdev_rpc.o 00:04:41.350 CC lib/bdev/bdev_zone.o 00:04:41.350 CC lib/bdev/part.o 
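The repeating CC → SO → SYMLINK triples in this part of the log are ordinary versioned shared-library builds: objects are compiled, linked into e.g. libspdk_log.so.7.1, and an unversioned libspdk_log.so symlink is created beside it for the linker. A minimal self-contained version of that pattern, using a toy library (all names here are illustrative, not SPDK's):

```shell
# Build a versioned shared object plus the unversioned development symlink,
# the same SO/SYMLINK pattern the log shows for each libspdk_*.so.
printf 'int demo(void){return 42;}\n' > demo.c
gcc -shared -fPIC -Wl,-soname,libdemo.so.7 -o libdemo.so.7.1 demo.c
ln -sf libdemo.so.7.1 libdemo.so   # what the SYMLINK step does
```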
00:04:41.350 CC lib/bdev/scsi_nvme.o 00:04:41.350 SO libspdk_nvme.so.15.0 00:04:41.608 LIB libspdk_fuse_dispatcher.a 00:04:41.608 SO libspdk_fuse_dispatcher.so.1.0 00:04:41.608 SYMLINK libspdk_nvme.so 00:04:41.608 SYMLINK libspdk_fuse_dispatcher.so 00:04:43.512 LIB libspdk_blob.a 00:04:43.512 SO libspdk_blob.so.11.0 00:04:43.512 SYMLINK libspdk_blob.so 00:04:43.512 CC lib/blobfs/blobfs.o 00:04:43.512 CC lib/lvol/lvol.o 00:04:43.512 CC lib/blobfs/tree.o 00:04:44.079 LIB libspdk_bdev.a 00:04:44.079 SO libspdk_bdev.so.17.0 00:04:44.079 SYMLINK libspdk_bdev.so 00:04:44.079 LIB libspdk_blobfs.a 00:04:44.346 SO libspdk_blobfs.so.10.0 00:04:44.346 SYMLINK libspdk_blobfs.so 00:04:44.346 CC lib/nbd/nbd.o 00:04:44.346 CC lib/scsi/dev.o 00:04:44.346 CC lib/nvmf/ctrlr.o 00:04:44.346 CC lib/nbd/nbd_rpc.o 00:04:44.346 CC lib/scsi/lun.o 00:04:44.346 CC lib/ublk/ublk.o 00:04:44.346 CC lib/ftl/ftl_core.o 00:04:44.346 CC lib/nvmf/ctrlr_discovery.o 00:04:44.346 CC lib/ublk/ublk_rpc.o 00:04:44.346 CC lib/scsi/port.o 00:04:44.346 CC lib/nvmf/ctrlr_bdev.o 00:04:44.346 CC lib/scsi/scsi.o 00:04:44.346 CC lib/ftl/ftl_init.o 00:04:44.346 CC lib/ftl/ftl_layout.o 00:04:44.346 CC lib/scsi/scsi_bdev.o 00:04:44.346 CC lib/nvmf/subsystem.o 00:04:44.346 CC lib/ftl/ftl_debug.o 00:04:44.346 CC lib/nvmf/nvmf.o 00:04:44.346 CC lib/scsi/scsi_pr.o 00:04:44.346 CC lib/ftl/ftl_io.o 00:04:44.346 CC lib/scsi/scsi_rpc.o 00:04:44.346 CC lib/nvmf/nvmf_rpc.o 00:04:44.346 CC lib/ftl/ftl_sb.o 00:04:44.346 CC lib/scsi/task.o 00:04:44.346 CC lib/nvmf/transport.o 00:04:44.346 CC lib/ftl/ftl_l2p.o 00:04:44.346 CC lib/ftl/ftl_l2p_flat.o 00:04:44.346 CC lib/nvmf/tcp.o 00:04:44.346 CC lib/nvmf/stubs.o 00:04:44.346 CC lib/ftl/ftl_nv_cache.o 00:04:44.346 CC lib/ftl/ftl_band.o 00:04:44.346 CC lib/nvmf/mdns_server.o 00:04:44.346 CC lib/nvmf/vfio_user.o 00:04:44.346 CC lib/ftl/ftl_band_ops.o 00:04:44.346 CC lib/nvmf/rdma.o 00:04:44.346 CC lib/ftl/ftl_writer.o 00:04:44.346 CC lib/nvmf/auth.o 00:04:44.346 CC lib/ftl/ftl_rq.o 
00:04:44.346 CC lib/ftl/ftl_reloc.o 00:04:44.346 CC lib/ftl/ftl_l2p_cache.o 00:04:44.346 CC lib/ftl/ftl_p2l.o 00:04:44.346 CC lib/ftl/ftl_p2l_log.o 00:04:44.346 CC lib/ftl/mngt/ftl_mngt.o 00:04:44.346 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:44.346 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:44.346 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:44.346 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:44.346 LIB libspdk_lvol.a 00:04:44.346 SO libspdk_lvol.so.10.0 00:04:44.608 SYMLINK libspdk_lvol.so 00:04:44.608 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:44.608 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:44.608 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:44.608 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:44.608 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:44.608 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:44.608 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:44.875 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:44.875 CC lib/ftl/utils/ftl_conf.o 00:04:44.875 CC lib/ftl/utils/ftl_md.o 00:04:44.875 CC lib/ftl/utils/ftl_mempool.o 00:04:44.875 CC lib/ftl/utils/ftl_bitmap.o 00:04:44.875 CC lib/ftl/utils/ftl_property.o 00:04:44.875 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:44.875 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:44.875 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:44.875 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:44.875 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:44.875 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:44.875 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:44.875 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:44.875 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:44.875 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:45.134 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:45.135 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:45.135 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:45.135 CC lib/ftl/base/ftl_base_dev.o 00:04:45.135 CC lib/ftl/base/ftl_base_bdev.o 00:04:45.135 CC lib/ftl/ftl_trace.o 00:04:45.135 LIB libspdk_nbd.a 00:04:45.135 SO libspdk_nbd.so.7.0 00:04:45.135 SYMLINK libspdk_nbd.so 00:04:45.393 LIB libspdk_scsi.a 00:04:45.394 SO libspdk_scsi.so.9.0 
00:04:45.394 SYMLINK libspdk_scsi.so 00:04:45.394 LIB libspdk_ublk.a 00:04:45.394 SO libspdk_ublk.so.3.0 00:04:45.653 SYMLINK libspdk_ublk.so 00:04:45.653 CC lib/vhost/vhost.o 00:04:45.653 CC lib/vhost/vhost_rpc.o 00:04:45.653 CC lib/vhost/vhost_scsi.o 00:04:45.653 CC lib/iscsi/conn.o 00:04:45.653 CC lib/iscsi/init_grp.o 00:04:45.653 CC lib/vhost/vhost_blk.o 00:04:45.653 CC lib/vhost/rte_vhost_user.o 00:04:45.653 CC lib/iscsi/iscsi.o 00:04:45.653 CC lib/iscsi/param.o 00:04:45.653 CC lib/iscsi/portal_grp.o 00:04:45.653 CC lib/iscsi/tgt_node.o 00:04:45.653 CC lib/iscsi/iscsi_subsystem.o 00:04:45.653 CC lib/iscsi/iscsi_rpc.o 00:04:45.653 CC lib/iscsi/task.o 00:04:45.912 LIB libspdk_ftl.a 00:04:45.912 SO libspdk_ftl.so.9.0 00:04:46.170 SYMLINK libspdk_ftl.so 00:04:46.738 LIB libspdk_vhost.a 00:04:46.996 SO libspdk_vhost.so.8.0 00:04:46.996 LIB libspdk_nvmf.a 00:04:46.996 SYMLINK libspdk_vhost.so 00:04:46.996 SO libspdk_nvmf.so.20.0 00:04:46.996 LIB libspdk_iscsi.a 00:04:47.254 SO libspdk_iscsi.so.8.0 00:04:47.254 SYMLINK libspdk_nvmf.so 00:04:47.254 SYMLINK libspdk_iscsi.so 00:04:47.512 CC module/vfu_device/vfu_virtio.o 00:04:47.512 CC module/vfu_device/vfu_virtio_blk.o 00:04:47.512 CC module/vfu_device/vfu_virtio_scsi.o 00:04:47.512 CC module/vfu_device/vfu_virtio_rpc.o 00:04:47.512 CC module/env_dpdk/env_dpdk_rpc.o 00:04:47.512 CC module/vfu_device/vfu_virtio_fs.o 00:04:47.512 CC module/accel/error/accel_error.o 00:04:47.512 CC module/accel/error/accel_error_rpc.o 00:04:47.512 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:47.512 CC module/accel/dsa/accel_dsa.o 00:04:47.512 CC module/accel/dsa/accel_dsa_rpc.o 00:04:47.512 CC module/keyring/linux/keyring.o 00:04:47.512 CC module/keyring/linux/keyring_rpc.o 00:04:47.512 CC module/sock/posix/posix.o 00:04:47.512 CC module/fsdev/aio/fsdev_aio.o 00:04:47.512 CC module/accel/ioat/accel_ioat.o 00:04:47.512 CC module/accel/ioat/accel_ioat_rpc.o 00:04:47.512 CC module/blob/bdev/blob_bdev.o 00:04:47.512 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:04:47.512 CC module/fsdev/aio/linux_aio_mgr.o 00:04:47.512 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.512 CC module/keyring/file/keyring_rpc.o 00:04:47.512 CC module/scheduler/gscheduler/gscheduler.o 00:04:47.512 CC module/keyring/file/keyring.o 00:04:47.512 CC module/accel/iaa/accel_iaa.o 00:04:47.512 CC module/accel/iaa/accel_iaa_rpc.o 00:04:47.771 LIB libspdk_env_dpdk_rpc.a 00:04:47.771 SO libspdk_env_dpdk_rpc.so.6.0 00:04:47.771 SYMLINK libspdk_env_dpdk_rpc.so 00:04:47.771 LIB libspdk_keyring_linux.a 00:04:47.771 LIB libspdk_scheduler_dpdk_governor.a 00:04:47.771 SO libspdk_keyring_linux.so.1.0 00:04:47.771 LIB libspdk_keyring_file.a 00:04:47.771 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:47.771 LIB libspdk_accel_ioat.a 00:04:47.771 SO libspdk_keyring_file.so.2.0 00:04:47.771 LIB libspdk_accel_error.a 00:04:47.771 LIB libspdk_scheduler_gscheduler.a 00:04:47.771 SYMLINK libspdk_keyring_linux.so 00:04:47.771 SO libspdk_accel_ioat.so.6.0 00:04:47.771 LIB libspdk_accel_iaa.a 00:04:47.771 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:47.771 SO libspdk_scheduler_gscheduler.so.4.0 00:04:47.771 SO libspdk_accel_error.so.2.0 00:04:48.029 SO libspdk_accel_iaa.so.3.0 00:04:48.029 SYMLINK libspdk_keyring_file.so 00:04:48.029 SYMLINK libspdk_accel_ioat.so 00:04:48.029 LIB libspdk_blob_bdev.a 00:04:48.029 LIB libspdk_scheduler_dynamic.a 00:04:48.029 SYMLINK libspdk_scheduler_gscheduler.so 00:04:48.029 SYMLINK libspdk_accel_error.so 00:04:48.029 LIB libspdk_accel_dsa.a 00:04:48.029 SYMLINK libspdk_accel_iaa.so 00:04:48.029 SO libspdk_blob_bdev.so.11.0 00:04:48.029 SO libspdk_scheduler_dynamic.so.4.0 00:04:48.029 SO libspdk_accel_dsa.so.5.0 00:04:48.029 SYMLINK libspdk_blob_bdev.so 00:04:48.029 SYMLINK libspdk_scheduler_dynamic.so 00:04:48.029 SYMLINK libspdk_accel_dsa.so 00:04:48.288 LIB libspdk_vfu_device.a 00:04:48.288 CC module/bdev/gpt/gpt.o 00:04:48.288 CC module/bdev/lvol/vbdev_lvol.o 00:04:48.288 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:04:48.288 CC module/bdev/gpt/vbdev_gpt.o 00:04:48.288 CC module/bdev/error/vbdev_error.o 00:04:48.288 CC module/bdev/error/vbdev_error_rpc.o 00:04:48.288 CC module/blobfs/bdev/blobfs_bdev.o 00:04:48.288 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:48.288 CC module/bdev/ftl/bdev_ftl.o 00:04:48.288 CC module/bdev/null/bdev_null.o 00:04:48.288 CC module/bdev/nvme/bdev_nvme.o 00:04:48.288 CC module/bdev/delay/vbdev_delay.o 00:04:48.288 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:48.288 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:48.288 CC module/bdev/null/bdev_null_rpc.o 00:04:48.288 CC module/bdev/nvme/nvme_rpc.o 00:04:48.288 CC module/bdev/malloc/bdev_malloc.o 00:04:48.288 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:48.288 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:48.288 CC module/bdev/split/vbdev_split.o 00:04:48.288 CC module/bdev/nvme/bdev_mdns_client.o 00:04:48.288 CC module/bdev/nvme/vbdev_opal.o 00:04:48.288 CC module/bdev/raid/bdev_raid_rpc.o 00:04:48.288 CC module/bdev/raid/bdev_raid.o 00:04:48.288 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:48.288 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.288 CC module/bdev/raid/bdev_raid_sb.o 00:04:48.288 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:48.288 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:48.288 CC module/bdev/aio/bdev_aio.o 00:04:48.288 CC module/bdev/raid/raid0.o 00:04:48.288 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:48.288 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:48.288 CC module/bdev/passthru/vbdev_passthru.o 00:04:48.288 CC module/bdev/aio/bdev_aio_rpc.o 00:04:48.288 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:48.288 CC module/bdev/raid/raid1.o 00:04:48.288 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.288 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:48.288 CC module/bdev/raid/concat.o 00:04:48.288 CC module/bdev/iscsi/bdev_iscsi.o 00:04:48.288 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:48.288 SO libspdk_vfu_device.so.3.0 
00:04:48.546 SYMLINK libspdk_vfu_device.so 00:04:48.546 LIB libspdk_fsdev_aio.a 00:04:48.546 SO libspdk_fsdev_aio.so.1.0 00:04:48.546 LIB libspdk_sock_posix.a 00:04:48.546 SO libspdk_sock_posix.so.6.0 00:04:48.546 SYMLINK libspdk_fsdev_aio.so 00:04:48.546 LIB libspdk_blobfs_bdev.a 00:04:48.805 SO libspdk_blobfs_bdev.so.6.0 00:04:48.805 SYMLINK libspdk_sock_posix.so 00:04:48.805 LIB libspdk_bdev_null.a 00:04:48.805 SYMLINK libspdk_blobfs_bdev.so 00:04:48.805 LIB libspdk_bdev_split.a 00:04:48.805 SO libspdk_bdev_null.so.6.0 00:04:48.805 SO libspdk_bdev_split.so.6.0 00:04:48.805 LIB libspdk_bdev_gpt.a 00:04:48.805 LIB libspdk_bdev_passthru.a 00:04:48.805 LIB libspdk_bdev_error.a 00:04:48.805 LIB libspdk_bdev_aio.a 00:04:48.805 SO libspdk_bdev_gpt.so.6.0 00:04:48.805 SO libspdk_bdev_passthru.so.6.0 00:04:48.805 SYMLINK libspdk_bdev_null.so 00:04:48.805 SO libspdk_bdev_error.so.6.0 00:04:48.805 LIB libspdk_bdev_ftl.a 00:04:48.805 SYMLINK libspdk_bdev_split.so 00:04:48.805 SO libspdk_bdev_aio.so.6.0 00:04:48.805 SO libspdk_bdev_ftl.so.6.0 00:04:48.805 LIB libspdk_bdev_iscsi.a 00:04:48.805 LIB libspdk_bdev_malloc.a 00:04:48.805 SYMLINK libspdk_bdev_gpt.so 00:04:48.805 SYMLINK libspdk_bdev_passthru.so 00:04:48.805 SYMLINK libspdk_bdev_error.so 00:04:48.805 SO libspdk_bdev_iscsi.so.6.0 00:04:48.805 SYMLINK libspdk_bdev_aio.so 00:04:48.805 LIB libspdk_bdev_zone_block.a 00:04:48.805 SO libspdk_bdev_malloc.so.6.0 00:04:48.805 LIB libspdk_bdev_delay.a 00:04:48.805 SYMLINK libspdk_bdev_ftl.so 00:04:48.805 SO libspdk_bdev_zone_block.so.6.0 00:04:49.063 SO libspdk_bdev_delay.so.6.0 00:04:49.063 SYMLINK libspdk_bdev_iscsi.so 00:04:49.063 SYMLINK libspdk_bdev_malloc.so 00:04:49.063 SYMLINK libspdk_bdev_zone_block.so 00:04:49.063 SYMLINK libspdk_bdev_delay.so 00:04:49.063 LIB libspdk_bdev_lvol.a 00:04:49.063 SO libspdk_bdev_lvol.so.6.0 00:04:49.063 LIB libspdk_bdev_virtio.a 00:04:49.063 SO libspdk_bdev_virtio.so.6.0 00:04:49.063 SYMLINK libspdk_bdev_lvol.so 00:04:49.063 SYMLINK 
libspdk_bdev_virtio.so 00:04:49.628 LIB libspdk_bdev_raid.a 00:04:49.628 SO libspdk_bdev_raid.so.6.0 00:04:49.628 SYMLINK libspdk_bdev_raid.so 00:04:51.070 LIB libspdk_bdev_nvme.a 00:04:51.070 SO libspdk_bdev_nvme.so.7.1 00:04:51.070 SYMLINK libspdk_bdev_nvme.so 00:04:51.377 CC module/event/subsystems/scheduler/scheduler.o 00:04:51.377 CC module/event/subsystems/iobuf/iobuf.o 00:04:51.377 CC module/event/subsystems/keyring/keyring.o 00:04:51.377 CC module/event/subsystems/sock/sock.o 00:04:51.377 CC module/event/subsystems/fsdev/fsdev.o 00:04:51.377 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:51.377 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:51.377 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:51.377 CC module/event/subsystems/vmd/vmd.o 00:04:51.377 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:51.658 LIB libspdk_event_keyring.a 00:04:51.658 LIB libspdk_event_vhost_blk.a 00:04:51.658 LIB libspdk_event_vfu_tgt.a 00:04:51.658 LIB libspdk_event_fsdev.a 00:04:51.658 LIB libspdk_event_scheduler.a 00:04:51.658 LIB libspdk_event_vmd.a 00:04:51.658 LIB libspdk_event_sock.a 00:04:51.658 SO libspdk_event_keyring.so.1.0 00:04:51.658 SO libspdk_event_vhost_blk.so.3.0 00:04:51.658 LIB libspdk_event_iobuf.a 00:04:51.658 SO libspdk_event_vfu_tgt.so.3.0 00:04:51.658 SO libspdk_event_fsdev.so.1.0 00:04:51.658 SO libspdk_event_scheduler.so.4.0 00:04:51.658 SO libspdk_event_vmd.so.6.0 00:04:51.658 SO libspdk_event_sock.so.5.0 00:04:51.658 SO libspdk_event_iobuf.so.3.0 00:04:51.658 SYMLINK libspdk_event_keyring.so 00:04:51.658 SYMLINK libspdk_event_vhost_blk.so 00:04:51.658 SYMLINK libspdk_event_fsdev.so 00:04:51.658 SYMLINK libspdk_event_vfu_tgt.so 00:04:51.658 SYMLINK libspdk_event_scheduler.so 00:04:51.658 SYMLINK libspdk_event_sock.so 00:04:51.658 SYMLINK libspdk_event_vmd.so 00:04:51.658 SYMLINK libspdk_event_iobuf.so 00:04:51.917 CC module/event/subsystems/accel/accel.o 00:04:52.177 LIB libspdk_event_accel.a 00:04:52.177 SO libspdk_event_accel.so.6.0 
00:04:52.177 SYMLINK libspdk_event_accel.so 00:04:52.435 CC module/event/subsystems/bdev/bdev.o 00:04:52.436 LIB libspdk_event_bdev.a 00:04:52.436 SO libspdk_event_bdev.so.6.0 00:04:52.694 SYMLINK libspdk_event_bdev.so 00:04:52.694 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:52.694 CC module/event/subsystems/scsi/scsi.o 00:04:52.694 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.694 CC module/event/subsystems/nbd/nbd.o 00:04:52.694 CC module/event/subsystems/ublk/ublk.o 00:04:52.952 LIB libspdk_event_nbd.a 00:04:52.952 LIB libspdk_event_ublk.a 00:04:52.952 LIB libspdk_event_scsi.a 00:04:52.952 SO libspdk_event_nbd.so.6.0 00:04:52.952 SO libspdk_event_ublk.so.3.0 00:04:52.952 SO libspdk_event_scsi.so.6.0 00:04:52.952 SYMLINK libspdk_event_nbd.so 00:04:52.952 SYMLINK libspdk_event_ublk.so 00:04:52.952 SYMLINK libspdk_event_scsi.so 00:04:52.952 LIB libspdk_event_nvmf.a 00:04:52.952 SO libspdk_event_nvmf.so.6.0 00:04:52.952 SYMLINK libspdk_event_nvmf.so 00:04:53.210 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:53.210 CC module/event/subsystems/iscsi/iscsi.o 00:04:53.210 LIB libspdk_event_vhost_scsi.a 00:04:53.210 SO libspdk_event_vhost_scsi.so.3.0 00:04:53.210 LIB libspdk_event_iscsi.a 00:04:53.469 SO libspdk_event_iscsi.so.6.0 00:04:53.469 SYMLINK libspdk_event_vhost_scsi.so 00:04:53.469 SYMLINK libspdk_event_iscsi.so 00:04:53.469 SO libspdk.so.6.0 00:04:53.469 SYMLINK libspdk.so 00:04:53.736 TEST_HEADER include/spdk/accel.h 00:04:53.736 TEST_HEADER include/spdk/accel_module.h 00:04:53.736 CC app/spdk_nvme_perf/perf.o 00:04:53.736 CXX app/trace/trace.o 00:04:53.736 TEST_HEADER include/spdk/assert.h 00:04:53.736 CC app/spdk_nvme_identify/identify.o 00:04:53.736 TEST_HEADER include/spdk/barrier.h 00:04:53.736 CC app/spdk_top/spdk_top.o 00:04:53.736 CC app/spdk_nvme_discover/discovery_aer.o 00:04:53.736 TEST_HEADER include/spdk/base64.h 00:04:53.736 TEST_HEADER include/spdk/bdev.h 00:04:53.736 TEST_HEADER include/spdk/bdev_module.h 00:04:53.736 CC 
app/trace_record/trace_record.o 00:04:53.736 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.736 TEST_HEADER include/spdk/bit_array.h 00:04:53.736 CC test/rpc_client/rpc_client_test.o 00:04:53.736 CC app/spdk_lspci/spdk_lspci.o 00:04:53.736 TEST_HEADER include/spdk/bit_pool.h 00:04:53.736 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.736 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.736 TEST_HEADER include/spdk/blobfs.h 00:04:53.737 TEST_HEADER include/spdk/blob.h 00:04:53.737 TEST_HEADER include/spdk/conf.h 00:04:53.737 TEST_HEADER include/spdk/config.h 00:04:53.737 TEST_HEADER include/spdk/cpuset.h 00:04:53.737 TEST_HEADER include/spdk/crc16.h 00:04:53.737 TEST_HEADER include/spdk/crc32.h 00:04:53.737 TEST_HEADER include/spdk/crc64.h 00:04:53.737 TEST_HEADER include/spdk/dif.h 00:04:53.737 TEST_HEADER include/spdk/dma.h 00:04:53.737 TEST_HEADER include/spdk/endian.h 00:04:53.737 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.737 TEST_HEADER include/spdk/env.h 00:04:53.737 TEST_HEADER include/spdk/event.h 00:04:53.737 TEST_HEADER include/spdk/fd_group.h 00:04:53.737 TEST_HEADER include/spdk/fd.h 00:04:53.737 TEST_HEADER include/spdk/file.h 00:04:53.737 TEST_HEADER include/spdk/fsdev.h 00:04:53.737 TEST_HEADER include/spdk/ftl.h 00:04:53.737 TEST_HEADER include/spdk/fsdev_module.h 00:04:53.737 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:53.737 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.737 TEST_HEADER include/spdk/hexlify.h 00:04:53.737 TEST_HEADER include/spdk/idxd.h 00:04:53.737 TEST_HEADER include/spdk/histogram_data.h 00:04:53.737 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.737 TEST_HEADER include/spdk/init.h 00:04:53.737 TEST_HEADER include/spdk/ioat.h 00:04:53.737 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.737 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.737 TEST_HEADER include/spdk/json.h 00:04:53.737 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.737 TEST_HEADER include/spdk/likely.h 00:04:53.737 TEST_HEADER include/spdk/keyring_module.h 
00:04:53.737 TEST_HEADER include/spdk/keyring.h 00:04:53.737 TEST_HEADER include/spdk/log.h 00:04:53.737 TEST_HEADER include/spdk/lvol.h 00:04:53.737 TEST_HEADER include/spdk/md5.h 00:04:53.737 TEST_HEADER include/spdk/memory.h 00:04:53.737 TEST_HEADER include/spdk/mmio.h 00:04:53.737 TEST_HEADER include/spdk/nbd.h 00:04:53.737 TEST_HEADER include/spdk/net.h 00:04:53.737 TEST_HEADER include/spdk/notify.h 00:04:53.737 TEST_HEADER include/spdk/nvme.h 00:04:53.737 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.737 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.737 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.737 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.737 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.737 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.737 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.737 TEST_HEADER include/spdk/nvmf.h 00:04:53.737 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.737 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.737 TEST_HEADER include/spdk/opal.h 00:04:53.737 TEST_HEADER include/spdk/opal_spec.h 00:04:53.737 TEST_HEADER include/spdk/pci_ids.h 00:04:53.737 TEST_HEADER include/spdk/pipe.h 00:04:53.737 TEST_HEADER include/spdk/queue.h 00:04:53.737 TEST_HEADER include/spdk/reduce.h 00:04:53.737 TEST_HEADER include/spdk/rpc.h 00:04:53.737 TEST_HEADER include/spdk/scheduler.h 00:04:53.737 TEST_HEADER include/spdk/scsi.h 00:04:53.737 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.737 TEST_HEADER include/spdk/sock.h 00:04:53.737 TEST_HEADER include/spdk/stdinc.h 00:04:53.737 TEST_HEADER include/spdk/string.h 00:04:53.737 TEST_HEADER include/spdk/thread.h 00:04:53.737 TEST_HEADER include/spdk/trace.h 00:04:53.737 TEST_HEADER include/spdk/trace_parser.h 00:04:53.737 TEST_HEADER include/spdk/tree.h 00:04:53.737 TEST_HEADER include/spdk/ublk.h 00:04:53.737 TEST_HEADER include/spdk/util.h 00:04:53.737 TEST_HEADER include/spdk/uuid.h 00:04:53.737 TEST_HEADER include/spdk/version.h 00:04:53.737 TEST_HEADER include/spdk/vfio_user_pci.h 
00:04:53.737 TEST_HEADER include/spdk/vhost.h 00:04:53.737 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.737 TEST_HEADER include/spdk/vmd.h 00:04:53.737 TEST_HEADER include/spdk/xor.h 00:04:53.737 TEST_HEADER include/spdk/zipf.h 00:04:53.737 CXX test/cpp_headers/accel.o 00:04:53.737 CXX test/cpp_headers/accel_module.o 00:04:53.737 CXX test/cpp_headers/barrier.o 00:04:53.737 CXX test/cpp_headers/assert.o 00:04:53.737 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.737 CXX test/cpp_headers/base64.o 00:04:53.737 CXX test/cpp_headers/bdev.o 00:04:53.737 CXX test/cpp_headers/bdev_module.o 00:04:53.737 CXX test/cpp_headers/bdev_zone.o 00:04:53.737 CXX test/cpp_headers/bit_array.o 00:04:53.737 CXX test/cpp_headers/bit_pool.o 00:04:53.737 CXX test/cpp_headers/blob_bdev.o 00:04:53.737 CXX test/cpp_headers/blobfs_bdev.o 00:04:53.737 CXX test/cpp_headers/blob.o 00:04:53.737 CXX test/cpp_headers/conf.o 00:04:53.737 CC app/spdk_dd/spdk_dd.o 00:04:53.737 CXX test/cpp_headers/config.o 00:04:53.737 CXX test/cpp_headers/cpuset.o 00:04:53.737 CXX test/cpp_headers/blobfs.o 00:04:53.737 CXX test/cpp_headers/crc16.o 00:04:53.737 CC app/nvmf_tgt/nvmf_main.o 00:04:53.737 CC app/iscsi_tgt/iscsi_tgt.o 00:04:53.737 CC app/spdk_tgt/spdk_tgt.o 00:04:53.737 CXX test/cpp_headers/crc32.o 00:04:53.737 CC examples/ioat/perf/perf.o 00:04:53.737 CC examples/ioat/verify/verify.o 00:04:53.737 CC test/env/memory/memory_ut.o 00:04:53.737 CC test/thread/poller_perf/poller_perf.o 00:04:53.737 CC test/app/stub/stub.o 00:04:53.737 CC examples/util/zipf/zipf.o 00:04:53.737 CC test/env/pci/pci_ut.o 00:04:53.737 CC test/app/jsoncat/jsoncat.o 00:04:53.737 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:53.737 CC test/env/vtophys/vtophys.o 00:04:53.737 CC test/app/histogram_perf/histogram_perf.o 00:04:53.737 CC app/fio/nvme/fio_plugin.o 00:04:54.000 CC test/dma/test_dma/test_dma.o 00:04:54.000 CC app/fio/bdev/fio_plugin.o 00:04:54.000 CC test/app/bdev_svc/bdev_svc.o 00:04:54.000 LINK spdk_lspci 
00:04:54.000 CC test/env/mem_callbacks/mem_callbacks.o 00:04:54.000 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:54.260 LINK rpc_client_test 00:04:54.260 LINK spdk_nvme_discover 00:04:54.260 CXX test/cpp_headers/crc64.o 00:04:54.260 LINK histogram_perf 00:04:54.260 LINK interrupt_tgt 00:04:54.260 LINK vtophys 00:04:54.260 CXX test/cpp_headers/dif.o 00:04:54.260 LINK jsoncat 00:04:54.260 CXX test/cpp_headers/dma.o 00:04:54.260 CXX test/cpp_headers/endian.o 00:04:54.260 LINK poller_perf 00:04:54.260 CXX test/cpp_headers/env_dpdk.o 00:04:54.260 LINK env_dpdk_post_init 00:04:54.260 LINK zipf 00:04:54.260 CXX test/cpp_headers/env.o 00:04:54.260 CXX test/cpp_headers/event.o 00:04:54.260 LINK nvmf_tgt 00:04:54.260 CXX test/cpp_headers/fd_group.o 00:04:54.260 CXX test/cpp_headers/fd.o 00:04:54.260 CXX test/cpp_headers/file.o 00:04:54.260 CXX test/cpp_headers/fsdev.o 00:04:54.260 LINK spdk_trace_record 00:04:54.260 CXX test/cpp_headers/fsdev_module.o 00:04:54.260 LINK iscsi_tgt 00:04:54.260 LINK stub 00:04:54.260 CXX test/cpp_headers/ftl.o 00:04:54.260 LINK ioat_perf 00:04:54.260 LINK verify 00:04:54.260 CXX test/cpp_headers/fuse_dispatcher.o 00:04:54.260 CXX test/cpp_headers/gpt_spec.o 00:04:54.260 CXX test/cpp_headers/hexlify.o 00:04:54.260 CXX test/cpp_headers/histogram_data.o 00:04:54.260 LINK spdk_tgt 00:04:54.260 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:54.260 LINK bdev_svc 00:04:54.260 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:54.526 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:54.526 CXX test/cpp_headers/idxd.o 00:04:54.526 CXX test/cpp_headers/idxd_spec.o 00:04:54.526 CXX test/cpp_headers/init.o 00:04:54.526 CXX test/cpp_headers/ioat.o 00:04:54.526 CXX test/cpp_headers/ioat_spec.o 00:04:54.526 CXX test/cpp_headers/iscsi_spec.o 00:04:54.526 CXX test/cpp_headers/json.o 00:04:54.526 LINK spdk_dd 00:04:54.526 CXX test/cpp_headers/jsonrpc.o 00:04:54.526 CXX test/cpp_headers/keyring.o 00:04:54.526 CXX test/cpp_headers/keyring_module.o 00:04:54.526 LINK 
spdk_trace 00:04:54.526 CXX test/cpp_headers/likely.o 00:04:54.526 CXX test/cpp_headers/log.o 00:04:54.526 CXX test/cpp_headers/lvol.o 00:04:54.526 CXX test/cpp_headers/md5.o 00:04:54.526 CXX test/cpp_headers/memory.o 00:04:54.794 LINK pci_ut 00:04:54.794 CXX test/cpp_headers/mmio.o 00:04:54.794 CXX test/cpp_headers/nbd.o 00:04:54.794 CXX test/cpp_headers/net.o 00:04:54.794 CXX test/cpp_headers/notify.o 00:04:54.794 CXX test/cpp_headers/nvme.o 00:04:54.794 CXX test/cpp_headers/nvme_intel.o 00:04:54.794 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.794 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:54.794 CXX test/cpp_headers/nvme_spec.o 00:04:54.794 CXX test/cpp_headers/nvme_zns.o 00:04:54.794 CXX test/cpp_headers/nvmf_cmd.o 00:04:54.794 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:54.794 LINK nvme_fuzz 00:04:54.794 CC examples/vmd/lsvmd/lsvmd.o 00:04:54.794 CXX test/cpp_headers/nvmf.o 00:04:54.794 CC examples/sock/hello_world/hello_sock.o 00:04:54.794 CXX test/cpp_headers/nvmf_spec.o 00:04:55.057 CC test/event/reactor/reactor.o 00:04:55.057 CXX test/cpp_headers/nvmf_transport.o 00:04:55.057 CC examples/vmd/led/led.o 00:04:55.057 CC test/event/event_perf/event_perf.o 00:04:55.057 CC examples/idxd/perf/perf.o 00:04:55.057 LINK spdk_bdev 00:04:55.057 CXX test/cpp_headers/opal.o 00:04:55.057 CC examples/thread/thread/thread_ex.o 00:04:55.057 CC test/event/reactor_perf/reactor_perf.o 00:04:55.057 LINK spdk_nvme 00:04:55.057 CXX test/cpp_headers/opal_spec.o 00:04:55.057 LINK test_dma 00:04:55.057 CXX test/cpp_headers/pci_ids.o 00:04:55.057 CXX test/cpp_headers/pipe.o 00:04:55.057 CXX test/cpp_headers/queue.o 00:04:55.057 CXX test/cpp_headers/reduce.o 00:04:55.057 CXX test/cpp_headers/rpc.o 00:04:55.057 CC test/event/app_repeat/app_repeat.o 00:04:55.057 CXX test/cpp_headers/scheduler.o 00:04:55.057 CXX test/cpp_headers/scsi.o 00:04:55.057 CXX test/cpp_headers/scsi_spec.o 00:04:55.057 CXX test/cpp_headers/sock.o 00:04:55.057 CXX test/cpp_headers/stdinc.o 00:04:55.057 CXX 
test/cpp_headers/string.o 00:04:55.057 CXX test/cpp_headers/thread.o 00:04:55.057 CXX test/cpp_headers/trace.o 00:04:55.057 CXX test/cpp_headers/trace_parser.o 00:04:55.057 CXX test/cpp_headers/tree.o 00:04:55.057 CC test/event/scheduler/scheduler.o 00:04:55.057 CXX test/cpp_headers/ublk.o 00:04:55.057 CXX test/cpp_headers/util.o 00:04:55.057 CXX test/cpp_headers/uuid.o 00:04:55.057 CXX test/cpp_headers/version.o 00:04:55.317 CC app/vhost/vhost.o 00:04:55.317 CXX test/cpp_headers/vfio_user_pci.o 00:04:55.317 CXX test/cpp_headers/vfio_user_spec.o 00:04:55.317 CXX test/cpp_headers/vhost.o 00:04:55.317 LINK lsvmd 00:04:55.317 CXX test/cpp_headers/vmd.o 00:04:55.317 CXX test/cpp_headers/xor.o 00:04:55.317 LINK vhost_fuzz 00:04:55.317 CXX test/cpp_headers/zipf.o 00:04:55.317 LINK reactor 00:04:55.317 LINK event_perf 00:04:55.317 LINK led 00:04:55.317 LINK mem_callbacks 00:04:55.317 LINK reactor_perf 00:04:55.317 LINK spdk_nvme_identify 00:04:55.317 LINK spdk_nvme_perf 00:04:55.317 LINK spdk_top 00:04:55.317 LINK app_repeat 00:04:55.317 LINK hello_sock 00:04:55.576 LINK thread 00:04:55.576 LINK vhost 00:04:55.576 LINK idxd_perf 00:04:55.576 LINK scheduler 00:04:55.576 CC test/nvme/reset/reset.o 00:04:55.576 CC test/nvme/e2edp/nvme_dp.o 00:04:55.576 CC test/nvme/reserve/reserve.o 00:04:55.576 CC test/nvme/overhead/overhead.o 00:04:55.576 CC test/nvme/sgl/sgl.o 00:04:55.576 CC test/nvme/simple_copy/simple_copy.o 00:04:55.576 CC test/nvme/connect_stress/connect_stress.o 00:04:55.576 CC test/nvme/aer/aer.o 00:04:55.576 CC test/nvme/err_injection/err_injection.o 00:04:55.576 CC test/nvme/startup/startup.o 00:04:55.576 CC test/nvme/boot_partition/boot_partition.o 00:04:55.576 CC test/nvme/compliance/nvme_compliance.o 00:04:55.576 CC test/nvme/fused_ordering/fused_ordering.o 00:04:55.576 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:55.576 CC test/nvme/fdp/fdp.o 00:04:55.576 CC test/nvme/cuse/cuse.o 00:04:55.576 CC test/blobfs/mkfs/mkfs.o 00:04:55.576 CC test/accel/dif/dif.o 
00:04:55.835 CC test/lvol/esnap/esnap.o 00:04:55.835 LINK boot_partition 00:04:55.835 LINK doorbell_aers 00:04:55.835 LINK startup 00:04:55.835 LINK reserve 00:04:55.835 CC examples/nvme/abort/abort.o 00:04:55.835 CC examples/nvme/hotplug/hotplug.o 00:04:55.835 CC examples/nvme/arbitration/arbitration.o 00:04:55.835 CC examples/nvme/reconnect/reconnect.o 00:04:55.835 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:55.835 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:55.835 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:55.835 LINK fused_ordering 00:04:55.835 CC examples/nvme/hello_world/hello_world.o 00:04:56.093 LINK simple_copy 00:04:56.093 LINK mkfs 00:04:56.093 LINK connect_stress 00:04:56.093 LINK nvme_dp 00:04:56.093 LINK err_injection 00:04:56.093 LINK reset 00:04:56.093 LINK overhead 00:04:56.093 LINK aer 00:04:56.093 CC examples/accel/perf/accel_perf.o 00:04:56.093 CC examples/blob/hello_world/hello_blob.o 00:04:56.093 LINK nvme_compliance 00:04:56.093 CC examples/blob/cli/blobcli.o 00:04:56.093 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:56.093 LINK fdp 00:04:56.093 LINK memory_ut 00:04:56.093 LINK sgl 00:04:56.093 LINK pmr_persistence 00:04:56.093 LINK cmb_copy 00:04:56.093 LINK hello_world 00:04:56.359 LINK hotplug 00:04:56.359 LINK hello_blob 00:04:56.359 LINK abort 00:04:56.359 LINK arbitration 00:04:56.359 LINK reconnect 00:04:56.359 LINK hello_fsdev 00:04:56.617 LINK nvme_manage 00:04:56.617 LINK dif 00:04:56.617 LINK accel_perf 00:04:56.617 LINK blobcli 00:04:56.874 LINK iscsi_fuzz 00:04:56.874 CC test/bdev/bdevio/bdevio.o 00:04:57.131 CC examples/bdev/hello_world/hello_bdev.o 00:04:57.132 CC examples/bdev/bdevperf/bdevperf.o 00:04:57.132 LINK cuse 00:04:57.398 LINK hello_bdev 00:04:57.398 LINK bdevio 00:04:57.964 LINK bdevperf 00:04:58.222 CC examples/nvmf/nvmf/nvmf.o 00:04:58.480 LINK nvmf 00:05:01.016 LINK esnap 00:05:01.275 00:05:01.275 real 1m6.506s 00:05:01.275 user 9m4.083s 00:05:01.275 sys 1m58.250s 00:05:01.275 07:38:54 make 
-- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:01.275 07:38:54 make -- common/autotest_common.sh@10 -- $ set +x 00:05:01.275 ************************************ 00:05:01.275 END TEST make 00:05:01.275 ************************************ 00:05:01.275 07:38:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:01.275 07:38:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:01.275 07:38:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:01.275 07:38:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.275 07:38:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:01.275 07:38:54 -- pm/common@44 -- $ pid=498763 00:05:01.275 07:38:54 -- pm/common@50 -- $ kill -TERM 498763 00:05:01.275 07:38:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.275 07:38:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:01.275 07:38:54 -- pm/common@44 -- $ pid=498765 00:05:01.275 07:38:54 -- pm/common@50 -- $ kill -TERM 498765 00:05:01.275 07:38:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.275 07:38:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:01.275 07:38:54 -- pm/common@44 -- $ pid=498767 00:05:01.275 07:38:54 -- pm/common@50 -- $ kill -TERM 498767 00:05:01.275 07:38:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.275 07:38:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:01.275 07:38:54 -- pm/common@44 -- $ pid=498798 00:05:01.275 07:38:54 -- pm/common@50 -- $ sudo -E kill -TERM 498798 00:05:01.275 07:38:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:01.275 07:38:54 -- spdk/autorun.sh@27 -- $ sudo 
-E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:01.275 07:38:54 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.275 07:38:54 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.275 07:38:54 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.275 07:38:54 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.275 07:38:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.275 07:38:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.275 07:38:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.275 07:38:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.275 07:38:54 -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.275 07:38:54 -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.275 07:38:54 -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.275 07:38:54 -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.275 07:38:54 -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.275 07:38:54 -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.275 07:38:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.275 07:38:54 -- scripts/common.sh@344 -- # case "$op" in 00:05:01.275 07:38:54 -- scripts/common.sh@345 -- # : 1 00:05:01.275 07:38:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.275 07:38:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.275 07:38:54 -- scripts/common.sh@365 -- # decimal 1 00:05:01.275 07:38:54 -- scripts/common.sh@353 -- # local d=1 00:05:01.275 07:38:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.275 07:38:54 -- scripts/common.sh@355 -- # echo 1 00:05:01.275 07:38:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.275 07:38:54 -- scripts/common.sh@366 -- # decimal 2 00:05:01.275 07:38:54 -- scripts/common.sh@353 -- # local d=2 00:05:01.275 07:38:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.275 07:38:54 -- scripts/common.sh@355 -- # echo 2 00:05:01.275 07:38:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.275 07:38:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.275 07:38:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.275 07:38:54 -- scripts/common.sh@368 -- # return 0 00:05:01.275 07:38:54 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.275 07:38:54 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.275 --rc genhtml_branch_coverage=1 00:05:01.275 --rc genhtml_function_coverage=1 00:05:01.275 --rc genhtml_legend=1 00:05:01.275 --rc geninfo_all_blocks=1 00:05:01.275 --rc geninfo_unexecuted_blocks=1 00:05:01.275 00:05:01.275 ' 00:05:01.275 07:38:54 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.275 --rc genhtml_branch_coverage=1 00:05:01.275 --rc genhtml_function_coverage=1 00:05:01.275 --rc genhtml_legend=1 00:05:01.275 --rc geninfo_all_blocks=1 00:05:01.275 --rc geninfo_unexecuted_blocks=1 00:05:01.275 00:05:01.275 ' 00:05:01.275 07:38:54 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.275 --rc genhtml_branch_coverage=1 00:05:01.275 --rc 
genhtml_function_coverage=1 00:05:01.275 --rc genhtml_legend=1 00:05:01.275 --rc geninfo_all_blocks=1 00:05:01.275 --rc geninfo_unexecuted_blocks=1 00:05:01.275 00:05:01.275 ' 00:05:01.275 07:38:54 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.275 --rc genhtml_branch_coverage=1 00:05:01.275 --rc genhtml_function_coverage=1 00:05:01.275 --rc genhtml_legend=1 00:05:01.275 --rc geninfo_all_blocks=1 00:05:01.275 --rc geninfo_unexecuted_blocks=1 00:05:01.275 00:05:01.275 ' 00:05:01.275 07:38:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.275 07:38:54 -- nvmf/common.sh@7 -- # uname -s 00:05:01.275 07:38:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.275 07:38:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.275 07:38:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.275 07:38:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.275 07:38:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.275 07:38:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.275 07:38:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.276 07:38:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.276 07:38:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.276 07:38:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.276 07:38:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:01.276 07:38:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:01.276 07:38:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.276 07:38:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.276 07:38:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:01.276 07:38:54 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.276 07:38:54 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.276 07:38:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.276 07:38:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.276 07:38:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.276 07:38:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.276 07:38:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.276 07:38:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.276 07:38:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.276 07:38:54 -- paths/export.sh@5 -- # export PATH 00:05:01.276 07:38:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.276 07:38:54 -- nvmf/common.sh@51 -- # : 0 00:05:01.276 07:38:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.276 07:38:54 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:01.276 07:38:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.276 07:38:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.276 07:38:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.276 07:38:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.276 07:38:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.276 07:38:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.276 07:38:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.276 07:38:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:01.276 07:38:54 -- spdk/autotest.sh@32 -- # uname -s 00:05:01.276 07:38:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:01.276 07:38:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:01.276 07:38:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:01.276 07:38:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:01.276 07:38:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:01.276 07:38:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:01.276 07:38:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:01.276 07:38:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:01.276 07:38:54 -- spdk/autotest.sh@48 -- # udevadm_pid=579660 00:05:01.276 07:38:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:01.276 07:38:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:01.276 07:38:54 -- pm/common@17 -- # local monitor 00:05:01.276 07:38:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.276 07:38:54 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:01.276 07:38:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.276 07:38:54 -- pm/common@21 -- # date +%s 00:05:01.276 07:38:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.276 07:38:54 -- pm/common@21 -- # date +%s 00:05:01.276 07:38:54 -- pm/common@25 -- # sleep 1 00:05:01.276 07:38:54 -- pm/common@21 -- # date +%s 00:05:01.276 07:38:54 -- pm/common@21 -- # date +%s 00:05:01.276 07:38:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731911934 00:05:01.276 07:38:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731911934 00:05:01.276 07:38:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731911934 00:05:01.276 07:38:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731911934 00:05:01.536 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731911934_collect-vmstat.pm.log 00:05:01.536 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731911934_collect-cpu-load.pm.log 00:05:01.536 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731911934_collect-cpu-temp.pm.log 00:05:01.536 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731911934_collect-bmc-pm.bmc.pm.log 00:05:02.476 
07:38:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.476 07:38:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:02.476 07:38:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.476 07:38:55 -- common/autotest_common.sh@10 -- # set +x 00:05:02.476 07:38:55 -- spdk/autotest.sh@59 -- # create_test_list 00:05:02.476 07:38:55 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:02.476 07:38:55 -- common/autotest_common.sh@10 -- # set +x 00:05:02.476 07:38:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:02.476 07:38:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.476 07:38:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.476 07:38:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:02.476 07:38:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.476 07:38:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.476 07:38:55 -- common/autotest_common.sh@1457 -- # uname 00:05:02.476 07:38:55 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:02.476 07:38:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.476 07:38:55 -- common/autotest_common.sh@1477 -- # uname 00:05:02.476 07:38:55 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:02.476 07:38:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:02.476 07:38:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.476 lcov: LCOV version 1.15 00:05:02.476 07:38:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:34.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:34.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:39.855 07:39:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:39.855 07:39:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.855 07:39:32 -- common/autotest_common.sh@10 -- # set +x 00:05:39.855 07:39:32 -- spdk/autotest.sh@78 -- # rm -f 00:05:39.855 07:39:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:40.790 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:40.790 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:40.790 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:40.790 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:40.790 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:40.790 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:40.790 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:40.790 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:40.790 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:40.790 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:40.790 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:40.790 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:40.790 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:40.790 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:40.790 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:40.790 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:40.790 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:41.048 07:39:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:41.048 07:39:33 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:41.048 07:39:33 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:41.048 07:39:33 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:41.048 07:39:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:41.048 07:39:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:41.048 07:39:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:41.048 07:39:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:41.048 07:39:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:41.048 07:39:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:41.048 07:39:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.048 07:39:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.048 07:39:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:41.048 07:39:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:41.048 07:39:33 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:41.048 No valid GPT data, bailing 00:05:41.048 07:39:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:41.048 07:39:33 -- scripts/common.sh@394 -- # pt= 00:05:41.048 07:39:33 -- scripts/common.sh@395 -- # return 1 00:05:41.048 07:39:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:41.048 1+0 records in 00:05:41.048 1+0 records out 00:05:41.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00197468 s, 531 MB/s 00:05:41.048 07:39:33 -- spdk/autotest.sh@105 -- # sync 00:05:41.048 07:39:33 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:41.048 07:39:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:41.048 07:39:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:43.580 07:39:36 -- spdk/autotest.sh@111 -- # uname -s 00:05:43.580 07:39:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:43.580 07:39:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:43.580 07:39:36 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:44.517 Hugepages 00:05:44.517 node hugesize free / total 00:05:44.517 node0 1048576kB 0 / 0 00:05:44.517 node0 2048kB 0 / 0 00:05:44.517 node1 1048576kB 0 / 0 00:05:44.517 node1 2048kB 0 / 0 00:05:44.517 00:05:44.517 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.517 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:44.517 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:44.517 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:44.517 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:44.517 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:44.517 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:44.517 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:44.517 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:44.517 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:44.517 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:44.517 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:44.517 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:44.517 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:44.517 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:44.517 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:44.517 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:44.517 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:44.517 07:39:37 -- spdk/autotest.sh@117 -- # uname -s 00:05:44.517 07:39:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:44.517 07:39:37 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:44.518 07:39:37 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:45.894 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.894 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.894 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.894 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.894 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.894 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.894 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.894 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:45.894 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.894 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.894 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.894 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.894 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.894 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.894 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.894 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:46.834 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:46.834 07:39:39 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:47.773 07:39:40 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:47.773 07:39:40 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:47.773 07:39:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:47.773 07:39:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:47.773 07:39:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:47.773 07:39:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:47.773 07:39:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.773 07:39:40 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:47.773 07:39:40 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:48.033 07:39:40 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:48.033 07:39:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:48.033 07:39:40 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:48.969 Waiting for block devices as requested 00:05:49.229 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:49.229 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:49.488 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:49.488 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:49.488 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:49.488 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:49.749 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:49.749 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:49.749 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:50.009 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:50.009 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:50.009 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:50.009 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:50.269 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:50.269 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:50.269 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:50.269 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:50.528 07:39:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:50.528 07:39:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:50.528 07:39:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:50.528 07:39:43 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:50.528 07:39:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:50.528 07:39:43 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:50.528 07:39:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:50.528 07:39:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:50.528 07:39:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:50.528 07:39:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:50.528 07:39:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:50.528 07:39:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:50.528 07:39:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:50.528 07:39:43 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:50.528 07:39:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:50.528 07:39:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:50.528 07:39:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:50.528 07:39:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:50.528 07:39:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:50.528 07:39:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:50.528 07:39:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:50.528 07:39:43 -- common/autotest_common.sh@1543 -- # continue 00:05:50.528 07:39:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:50.528 07:39:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.528 07:39:43 -- common/autotest_common.sh@10 -- # set +x 00:05:50.528 07:39:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:50.528 07:39:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.528 07:39:43 -- common/autotest_common.sh@10 -- # set +x 00:05:50.528 07:39:43 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:51.903 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:51.903 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:05:51.903 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:51.903 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:51.903 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:51.903 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:51.903 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:51.903 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:51.903 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:51.903 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:51.903 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:51.903 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:51.903 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:51.903 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:51.903 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:51.903 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:52.839 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:53.097 07:39:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:53.097 07:39:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.097 07:39:45 -- common/autotest_common.sh@10 -- # set +x 00:05:53.097 07:39:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:53.097 07:39:45 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:53.097 07:39:45 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:53.097 07:39:45 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:53.097 07:39:45 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:53.097 07:39:45 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:53.097 07:39:45 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:53.097 07:39:45 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:53.097 07:39:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:53.097 07:39:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:53.097 07:39:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:53.097 07:39:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:53.097 07:39:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:53.097 07:39:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:53.097 07:39:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:53.097 07:39:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:53.097 07:39:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:53.097 07:39:46 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:53.097 07:39:46 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:53.097 07:39:46 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:53.097 07:39:46 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:53.097 07:39:46 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:53.097 07:39:46 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:53.097 07:39:46 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=590937 00:05:53.097 07:39:46 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.097 07:39:46 -- common/autotest_common.sh@1585 -- # waitforlisten 590937 00:05:53.097 07:39:46 -- common/autotest_common.sh@835 -- # '[' -z 590937 ']' 00:05:53.097 07:39:46 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.097 07:39:46 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.097 07:39:46 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
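The trace above extracts the OACS (Optional Admin Command Support) field from `nvme id-ctrl` output with `grep oacs | cut -d: -f2`, then masks out the Namespace Management bit (0x8) before deciding whether to continue. A minimal standalone sketch of that parsing, fed a hard-coded sample line instead of a live `nvme` invocation (the `oacs : 0xf` line format is taken from the log; no real controller is touched):

```shell
#!/bin/sh
# Extract the OACS field from one line of `nvme id-ctrl` output and
# test the Namespace Management bit (0x8), as the autotest helper does.
oacs_ns_manage() {
    # $1: a line such as "oacs      : 0xf"
    oacs=$(printf '%s\n' "$1" | grep oacs | cut -d: -f2)
    # Shell arithmetic evaluates the hex string; mask bit 3.
    printf '%d\n' $(( oacs & 0x8 ))
}

oacs_ns_manage "oacs      : 0xf"   # prints 8: NS management supported
oacs_ns_manage "oacs      : 0x7"   # prints 0: NS management absent
```

In the log the masked value is 8, so the `[[ 8 -ne 0 ]]` branch proceeds to the `unvmcap` check.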
00:05:53.097 07:39:46 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.097 07:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:53.097 [2024-11-18 07:39:46.099837] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:53.097 [2024-11-18 07:39:46.099930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590937 ] 00:05:53.097 [2024-11-18 07:39:46.166089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.356 [2024-11-18 07:39:46.212178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.613 07:39:46 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.613 07:39:46 -- common/autotest_common.sh@868 -- # return 0 00:05:53.613 07:39:46 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:53.613 07:39:46 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:53.613 07:39:46 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:56.897 nvme0n1 00:05:56.897 07:39:49 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:56.897 [2024-11-18 07:39:49.798920] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:56.897 [2024-11-18 07:39:49.798962] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:56.897 request: 00:05:56.897 { 00:05:56.897 "nvme_ctrlr_name": "nvme0", 00:05:56.897 "password": "test", 00:05:56.897 "method": "bdev_nvme_opal_revert", 00:05:56.897 "req_id": 1 00:05:56.897 } 00:05:56.897 Got JSON-RPC error response 00:05:56.897 response: 00:05:56.897 { 00:05:56.897 
"code": -32603, 00:05:56.897 "message": "Internal error" 00:05:56.897 } 00:05:56.897 07:39:49 -- common/autotest_common.sh@1591 -- # true 00:05:56.897 07:39:49 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:56.897 07:39:49 -- common/autotest_common.sh@1595 -- # killprocess 590937 00:05:56.897 07:39:49 -- common/autotest_common.sh@954 -- # '[' -z 590937 ']' 00:05:56.897 07:39:49 -- common/autotest_common.sh@958 -- # kill -0 590937 00:05:56.897 07:39:49 -- common/autotest_common.sh@959 -- # uname 00:05:56.897 07:39:49 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.897 07:39:49 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590937 00:05:56.897 07:39:49 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.897 07:39:49 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.897 07:39:49 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590937' 00:05:56.897 killing process with pid 590937 00:05:56.897 07:39:49 -- common/autotest_common.sh@973 -- # kill 590937 00:05:56.897 07:39:49 -- common/autotest_common.sh@978 -- # wait 590937 00:05:58.797 07:39:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:58.797 07:39:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:58.797 07:39:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:58.797 07:39:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:58.797 07:39:51 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:58.797 07:39:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.797 07:39:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.797 07:39:51 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:58.797 07:39:51 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:58.797 07:39:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.797 07:39:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.797 07:39:51 -- 
common/autotest_common.sh@10 -- # set +x 00:05:58.797 ************************************ 00:05:58.797 START TEST env 00:05:58.797 ************************************ 00:05:58.797 07:39:51 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:58.797 * Looking for test storage... 00:05:58.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:58.797 07:39:51 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.797 07:39:51 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.797 07:39:51 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.797 07:39:51 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.797 07:39:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.797 07:39:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.797 07:39:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.797 07:39:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.797 07:39:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.797 07:39:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.797 07:39:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.797 07:39:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.798 07:39:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.798 07:39:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.798 07:39:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.798 07:39:51 env -- scripts/common.sh@344 -- # case "$op" in 00:05:58.798 07:39:51 env -- scripts/common.sh@345 -- # : 1 00:05:58.798 07:39:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.798 07:39:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.798 07:39:51 env -- scripts/common.sh@365 -- # decimal 1 00:05:58.798 07:39:51 env -- scripts/common.sh@353 -- # local d=1 00:05:58.798 07:39:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.798 07:39:51 env -- scripts/common.sh@355 -- # echo 1 00:05:58.798 07:39:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.798 07:39:51 env -- scripts/common.sh@366 -- # decimal 2 00:05:58.798 07:39:51 env -- scripts/common.sh@353 -- # local d=2 00:05:58.798 07:39:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.798 07:39:51 env -- scripts/common.sh@355 -- # echo 2 00:05:58.798 07:39:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.798 07:39:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.798 07:39:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.798 07:39:51 env -- scripts/common.sh@368 -- # return 0 00:05:58.798 07:39:51 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.798 07:39:51 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.798 --rc genhtml_branch_coverage=1 00:05:58.798 --rc genhtml_function_coverage=1 00:05:58.798 --rc genhtml_legend=1 00:05:58.798 --rc geninfo_all_blocks=1 00:05:58.798 --rc geninfo_unexecuted_blocks=1 00:05:58.798 00:05:58.798 ' 00:05:58.798 07:39:51 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.798 --rc genhtml_branch_coverage=1 00:05:58.798 --rc genhtml_function_coverage=1 00:05:58.798 --rc genhtml_legend=1 00:05:58.798 --rc geninfo_all_blocks=1 00:05:58.798 --rc geninfo_unexecuted_blocks=1 00:05:58.798 00:05:58.798 ' 00:05:58.798 07:39:51 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:58.798 --rc genhtml_branch_coverage=1 00:05:58.798 --rc genhtml_function_coverage=1 00:05:58.798 --rc genhtml_legend=1 00:05:58.798 --rc geninfo_all_blocks=1 00:05:58.798 --rc geninfo_unexecuted_blocks=1 00:05:58.798 00:05:58.798 ' 00:05:58.798 07:39:51 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.798 --rc genhtml_branch_coverage=1 00:05:58.798 --rc genhtml_function_coverage=1 00:05:58.798 --rc genhtml_legend=1 00:05:58.798 --rc geninfo_all_blocks=1 00:05:58.798 --rc geninfo_unexecuted_blocks=1 00:05:58.798 00:05:58.798 ' 00:05:58.798 07:39:51 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.798 07:39:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.798 07:39:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.798 07:39:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.798 ************************************ 00:05:58.798 START TEST env_memory 00:05:58.798 ************************************ 00:05:58.798 07:39:51 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.798 00:05:58.798 00:05:58.798 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.798 http://cunit.sourceforge.net/ 00:05:58.798 00:05:58.798 00:05:58.798 Suite: memory 00:05:58.798 Test: alloc and free memory map ...[2024-11-18 07:39:51.820516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:58.798 passed 00:05:58.798 Test: mem map translation ...[2024-11-18 07:39:51.842033] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:58.798 [2024-11-18 
07:39:51.842055] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:58.798 [2024-11-18 07:39:51.842105] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:58.798 [2024-11-18 07:39:51.842117] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:58.798 passed 00:05:58.798 Test: mem map registration ...[2024-11-18 07:39:51.883480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:58.798 [2024-11-18 07:39:51.883520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:59.057 passed 00:05:59.057 Test: mem map adjacent registrations ...passed 00:05:59.057 00:05:59.057 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.057 suites 1 1 n/a 0 0 00:05:59.057 tests 4 4 4 0 0 00:05:59.057 asserts 152 152 152 0 n/a 00:05:59.057 00:05:59.057 Elapsed time = 0.144 seconds 00:05:59.057 00:05:59.057 real 0m0.153s 00:05:59.057 user 0m0.147s 00:05:59.057 sys 0m0.006s 00:05:59.057 07:39:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.057 07:39:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:59.057 ************************************ 00:05:59.057 END TEST env_memory 00:05:59.057 ************************************ 00:05:59.057 07:39:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:59.057 07:39:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:59.057 07:39:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.057 07:39:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.057 ************************************ 00:05:59.057 START TEST env_vtophys 00:05:59.057 ************************************ 00:05:59.057 07:39:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:59.057 EAL: lib.eal log level changed from notice to debug 00:05:59.057 EAL: Detected lcore 0 as core 0 on socket 0 00:05:59.057 EAL: Detected lcore 1 as core 1 on socket 0 00:05:59.057 EAL: Detected lcore 2 as core 2 on socket 0 00:05:59.057 EAL: Detected lcore 3 as core 3 on socket 0 00:05:59.057 EAL: Detected lcore 4 as core 4 on socket 0 00:05:59.057 EAL: Detected lcore 5 as core 5 on socket 0 00:05:59.057 EAL: Detected lcore 6 as core 8 on socket 0 00:05:59.057 EAL: Detected lcore 7 as core 9 on socket 0 00:05:59.057 EAL: Detected lcore 8 as core 10 on socket 0 00:05:59.057 EAL: Detected lcore 9 as core 11 on socket 0 00:05:59.057 EAL: Detected lcore 10 as core 12 on socket 0 00:05:59.057 EAL: Detected lcore 11 as core 13 on socket 0 00:05:59.057 EAL: Detected lcore 12 as core 0 on socket 1 00:05:59.057 EAL: Detected lcore 13 as core 1 on socket 1 00:05:59.057 EAL: Detected lcore 14 as core 2 on socket 1 00:05:59.057 EAL: Detected lcore 15 as core 3 on socket 1 00:05:59.057 EAL: Detected lcore 16 as core 4 on socket 1 00:05:59.057 EAL: Detected lcore 17 as core 5 on socket 1 00:05:59.057 EAL: Detected lcore 18 as core 8 on socket 1 00:05:59.057 EAL: Detected lcore 19 as core 9 on socket 1 00:05:59.057 EAL: Detected lcore 20 as core 10 on socket 1 00:05:59.057 EAL: Detected lcore 21 as core 11 on socket 1 00:05:59.057 EAL: Detected lcore 22 as core 12 on socket 1 00:05:59.057 EAL: Detected lcore 23 as core 13 on socket 1 00:05:59.057 EAL: Detected lcore 24 as core 0 on socket 0 00:05:59.058 EAL: Detected lcore 25 as core 
1 on socket 0 00:05:59.058 EAL: Detected lcore 26 as core 2 on socket 0 00:05:59.058 EAL: Detected lcore 27 as core 3 on socket 0 00:05:59.058 EAL: Detected lcore 28 as core 4 on socket 0 00:05:59.058 EAL: Detected lcore 29 as core 5 on socket 0 00:05:59.058 EAL: Detected lcore 30 as core 8 on socket 0 00:05:59.058 EAL: Detected lcore 31 as core 9 on socket 0 00:05:59.058 EAL: Detected lcore 32 as core 10 on socket 0 00:05:59.058 EAL: Detected lcore 33 as core 11 on socket 0 00:05:59.058 EAL: Detected lcore 34 as core 12 on socket 0 00:05:59.058 EAL: Detected lcore 35 as core 13 on socket 0 00:05:59.058 EAL: Detected lcore 36 as core 0 on socket 1 00:05:59.058 EAL: Detected lcore 37 as core 1 on socket 1 00:05:59.058 EAL: Detected lcore 38 as core 2 on socket 1 00:05:59.058 EAL: Detected lcore 39 as core 3 on socket 1 00:05:59.058 EAL: Detected lcore 40 as core 4 on socket 1 00:05:59.058 EAL: Detected lcore 41 as core 5 on socket 1 00:05:59.058 EAL: Detected lcore 42 as core 8 on socket 1 00:05:59.058 EAL: Detected lcore 43 as core 9 on socket 1 00:05:59.058 EAL: Detected lcore 44 as core 10 on socket 1 00:05:59.058 EAL: Detected lcore 45 as core 11 on socket 1 00:05:59.058 EAL: Detected lcore 46 as core 12 on socket 1 00:05:59.058 EAL: Detected lcore 47 as core 13 on socket 1 00:05:59.058 EAL: Maximum logical cores by configuration: 128 00:05:59.058 EAL: Detected CPU lcores: 48 00:05:59.058 EAL: Detected NUMA nodes: 2 00:05:59.058 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:59.058 EAL: Detected shared linkage of DPDK 00:05:59.058 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:59.058 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:59.058 EAL: Registered [vdev] bus. 
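EAL enumerates 48 lcores across two sockets before loading the PMD shared objects. As a hedged illustration (not part of the test suite), the per-socket lcore count can be tallied from those "Detected lcore" lines with a small awk pass over a captured log; the sample lines below are abbreviated from the output above:

```shell
#!/bin/sh
# Tally "EAL: Detected lcore N as core C on socket S" lines per socket;
# the socket id is the last field of each line.
count_lcores_per_socket() {
    awk '/Detected lcore/ { n[$NF]++ }
         END { for (s in n) printf "socket %s: %d lcores\n", s, n[s] }'
}

printf '%s\n' \
  "EAL: Detected lcore 0 as core 0 on socket 0" \
  "EAL: Detected lcore 12 as core 0 on socket 1" \
  "EAL: Detected lcore 24 as core 0 on socket 0" | count_lcores_per_socket | sort
```

Run against the full log above, this yields 24 lcores on each of socket 0 and socket 1, matching "Detected CPU lcores: 48" and "Detected NUMA nodes: 2".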
00:05:59.058 EAL: bus.vdev log level changed from disabled to notice 00:05:59.058 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:59.058 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:59.058 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:59.058 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:59.058 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:59.058 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:59.058 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:59.058 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:59.058 EAL: No shared files mode enabled, IPC will be disabled 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Bus pci wants IOVA as 'DC' 00:05:59.058 EAL: Bus vdev wants IOVA as 'DC' 00:05:59.058 EAL: Buses did not request a specific IOVA mode. 00:05:59.058 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:59.058 EAL: Selected IOVA mode 'VA' 00:05:59.058 EAL: Probing VFIO support... 00:05:59.058 EAL: IOMMU type 1 (Type 1) is supported 00:05:59.058 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:59.058 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:59.058 EAL: VFIO support initialized 00:05:59.058 EAL: Ask a virtual area of 0x2e000 bytes 00:05:59.058 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:59.058 EAL: Setting up physically contiguous memory... 
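The "IOMMU type 1 (Type 1) is supported" and "VFIO support initialized" lines hinge on the kernel exposing IOMMU groups. A common pre-flight check is to count them under sysfs — a sketch, not an SPDK script; `/sys/kernel/iommu_groups` is the standard kernel location, and the directory argument exists only so the function can be exercised against a scratch directory:

```shell
#!/bin/sh
# Count IOMMU groups under a sysfs-style directory; a non-zero count is a
# prerequisite for vfio-pci, which the EAL probe above relies on.
iommu_group_count() {
    dir=${1:-/sys/kernel/iommu_groups}
    [ -d "$dir" ] || { echo 0; return; }
    find "$dir" -mindepth 1 -maxdepth 1 -type d | wc -l | tr -d ' '
}
```

A count of 0 on real hardware usually means the IOMMU is disabled in firmware or the kernel command line, and EAL would fall back from IOVA-as-VA.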
00:05:59.058 EAL: Setting maximum number of open files to 524288 00:05:59.058 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:59.058 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:59.058 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:59.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.058 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:59.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.058 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:59.058 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:59.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.058 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:59.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.058 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:59.058 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:59.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.058 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:59.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.058 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:59.058 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:59.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.058 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:59.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.058 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:59.058 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:59.058 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:59.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.058 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:59.058 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.058 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:59.058 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:59.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.058 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:59.058 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.058 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:59.058 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:59.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.058 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:59.058 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.058 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:59.058 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:59.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.058 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:59.058 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.058 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:59.058 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:59.058 EAL: Hugepages will be freed exactly as allocated. 
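Each "Ask a virtual area of 0x400000000 bytes" line above reserves 16 GiB of address space for one memseg list, and EAL creates four lists per socket on two sockets. A quick arithmetic check of the total reservation implied by the log (pure shell arithmetic; nothing here talks to EAL):

```shell
#!/bin/sh
# 4 memseg lists/socket x 2 sockets x 0x400000000 bytes (16 GiB) each,
# matching the eight large VA reservations in the log.
lists_per_socket=4
sockets=2
bytes_per_list=$(( 0x400000000 ))
total_gib=$(( lists_per_socket * sockets * bytes_per_list / 1024 / 1024 / 1024 ))
echo "total VA reserved for memseg lists: ${total_gib} GiB"   # prints 128 GiB
```

This is address space only, not committed memory — hugepages are faulted in later, and "freed exactly as allocated" as the log notes.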
00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: TSC frequency is ~2700000 KHz 00:05:59.058 EAL: Main lcore 0 is ready (tid=7fa80c232a00;cpuset=[0]) 00:05:59.058 EAL: Trying to obtain current memory policy. 00:05:59.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.058 EAL: Restoring previous memory policy: 0 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was expanded by 2MB 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:59.058 EAL: Mem event callback 'spdk:(nil)' registered 00:05:59.058 00:05:59.058 00:05:59.058 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.058 http://cunit.sourceforge.net/ 00:05:59.058 00:05:59.058 00:05:59.058 Suite: components_suite 00:05:59.058 Test: vtophys_malloc_test ...passed 00:05:59.058 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:59.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.058 EAL: Restoring previous memory policy: 4 00:05:59.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was expanded by 4MB 00:05:59.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was shrunk by 4MB 00:05:59.058 EAL: Trying to obtain current memory policy. 
00:05:59.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.058 EAL: Restoring previous memory policy: 4 00:05:59.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was expanded by 6MB 00:05:59.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was shrunk by 6MB 00:05:59.058 EAL: Trying to obtain current memory policy. 00:05:59.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.058 EAL: Restoring previous memory policy: 4 00:05:59.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was expanded by 10MB 00:05:59.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was shrunk by 10MB 00:05:59.058 EAL: Trying to obtain current memory policy. 00:05:59.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.058 EAL: Restoring previous memory policy: 4 00:05:59.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was expanded by 18MB 00:05:59.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.058 EAL: request: mp_malloc_sync 00:05:59.058 EAL: No shared files mode enabled, IPC is disabled 00:05:59.058 EAL: Heap on socket 0 was shrunk by 18MB 00:05:59.058 EAL: Trying to obtain current memory policy. 
00:05:59.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.059 EAL: Restoring previous memory policy: 4 00:05:59.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.059 EAL: request: mp_malloc_sync 00:05:59.059 EAL: No shared files mode enabled, IPC is disabled 00:05:59.059 EAL: Heap on socket 0 was expanded by 34MB 00:05:59.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.059 EAL: request: mp_malloc_sync 00:05:59.059 EAL: No shared files mode enabled, IPC is disabled 00:05:59.059 EAL: Heap on socket 0 was shrunk by 34MB 00:05:59.059 EAL: Trying to obtain current memory policy. 00:05:59.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.059 EAL: Restoring previous memory policy: 4 00:05:59.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.059 EAL: request: mp_malloc_sync 00:05:59.059 EAL: No shared files mode enabled, IPC is disabled 00:05:59.059 EAL: Heap on socket 0 was expanded by 66MB 00:05:59.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.059 EAL: request: mp_malloc_sync 00:05:59.059 EAL: No shared files mode enabled, IPC is disabled 00:05:59.059 EAL: Heap on socket 0 was shrunk by 66MB 00:05:59.059 EAL: Trying to obtain current memory policy. 00:05:59.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.317 EAL: Restoring previous memory policy: 4 00:05:59.317 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.317 EAL: request: mp_malloc_sync 00:05:59.317 EAL: No shared files mode enabled, IPC is disabled 00:05:59.317 EAL: Heap on socket 0 was expanded by 130MB 00:05:59.317 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.317 EAL: request: mp_malloc_sync 00:05:59.317 EAL: No shared files mode enabled, IPC is disabled 00:05:59.317 EAL: Heap on socket 0 was shrunk by 130MB 00:05:59.317 EAL: Trying to obtain current memory policy. 
00:05:59.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.317 EAL: Restoring previous memory policy: 4 00:05:59.317 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.317 EAL: request: mp_malloc_sync 00:05:59.317 EAL: No shared files mode enabled, IPC is disabled 00:05:59.317 EAL: Heap on socket 0 was expanded by 258MB 00:05:59.317 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.317 EAL: request: mp_malloc_sync 00:05:59.317 EAL: No shared files mode enabled, IPC is disabled 00:05:59.317 EAL: Heap on socket 0 was shrunk by 258MB 00:05:59.317 EAL: Trying to obtain current memory policy. 00:05:59.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.576 EAL: Restoring previous memory policy: 4 00:05:59.576 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.576 EAL: request: mp_malloc_sync 00:05:59.576 EAL: No shared files mode enabled, IPC is disabled 00:05:59.576 EAL: Heap on socket 0 was expanded by 514MB 00:05:59.576 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.834 EAL: request: mp_malloc_sync 00:05:59.834 EAL: No shared files mode enabled, IPC is disabled 00:05:59.834 EAL: Heap on socket 0 was shrunk by 514MB 00:05:59.834 EAL: Trying to obtain current memory policy. 
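The vtophys malloc test walks the heap through expansions of 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB — each step is 2^k + 2 MB. A sketch reproducing that sequence (pure arithmetic; the sizes are read off the "Heap on socket 0 was expanded by" lines above):

```shell
#!/bin/sh
# Reproduce the malloc-test expansion sizes: a 2 MB base plus a doubling delta.
sizes=""
k=1
while [ $k -le 10 ]; do
    sizes="$sizes $(( (1 << k) + 2 ))"
    k=$(( k + 1 ))
done
echo "expansion sizes (MB):$sizes"
```

Each expansion is paired with an equal shrink in the log, so the heap returns to its 2 MB baseline between steps.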
00:05:59.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.092 EAL: Restoring previous memory policy: 4 00:06:00.092 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.092 EAL: request: mp_malloc_sync 00:06:00.092 EAL: No shared files mode enabled, IPC is disabled 00:06:00.092 EAL: Heap on socket 0 was expanded by 1026MB 00:06:00.352 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.352 EAL: request: mp_malloc_sync 00:06:00.352 EAL: No shared files mode enabled, IPC is disabled 00:06:00.352 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:00.352 passed 00:06:00.352 00:06:00.352 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.352 suites 1 1 n/a 0 0 00:06:00.352 tests 2 2 2 0 0 00:06:00.352 asserts 497 497 497 0 n/a 00:06:00.352 00:06:00.352 Elapsed time = 1.308 seconds 00:06:00.352 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.352 EAL: request: mp_malloc_sync 00:06:00.352 EAL: No shared files mode enabled, IPC is disabled 00:06:00.352 EAL: Heap on socket 0 was shrunk by 2MB 00:06:00.352 EAL: No shared files mode enabled, IPC is disabled 00:06:00.352 EAL: No shared files mode enabled, IPC is disabled 00:06:00.352 EAL: No shared files mode enabled, IPC is disabled 00:06:00.352 00:06:00.352 real 0m1.421s 00:06:00.352 user 0m0.830s 00:06:00.352 sys 0m0.563s 00:06:00.352 07:39:53 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.352 07:39:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:00.352 ************************************ 00:06:00.352 END TEST env_vtophys 00:06:00.352 ************************************ 00:06:00.352 07:39:53 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.352 07:39:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.352 07:39:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.352 07:39:53 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.613 
************************************ 00:06:00.613 START TEST env_pci 00:06:00.613 ************************************ 00:06:00.613 07:39:53 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.613 00:06:00.613 00:06:00.613 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.613 http://cunit.sourceforge.net/ 00:06:00.613 00:06:00.613 00:06:00.613 Suite: pci 00:06:00.613 Test: pci_hook ...[2024-11-18 07:39:53.470322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 591838 has claimed it 00:06:00.613 EAL: Cannot find device (10000:00:01.0) 00:06:00.613 EAL: Failed to attach device on primary process 00:06:00.613 passed 00:06:00.613 00:06:00.613 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.613 suites 1 1 n/a 0 0 00:06:00.613 tests 1 1 1 0 0 00:06:00.613 asserts 25 25 25 0 n/a 00:06:00.613 00:06:00.613 Elapsed time = 0.022 seconds 00:06:00.613 00:06:00.613 real 0m0.035s 00:06:00.613 user 0m0.017s 00:06:00.613 sys 0m0.018s 00:06:00.613 07:39:53 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.613 07:39:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:00.613 ************************************ 00:06:00.613 END TEST env_pci 00:06:00.613 ************************************ 00:06:00.613 07:39:53 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:00.613 07:39:53 env -- env/env.sh@15 -- # uname 00:06:00.613 07:39:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:00.613 07:39:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:00.613 07:39:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.613 07:39:53 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:00.613 07:39:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.613 07:39:53 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.613 ************************************ 00:06:00.613 START TEST env_dpdk_post_init 00:06:00.613 ************************************ 00:06:00.613 07:39:53 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.613 EAL: Detected CPU lcores: 48 00:06:00.613 EAL: Detected NUMA nodes: 2 00:06:00.613 EAL: Detected shared linkage of DPDK 00:06:00.613 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:00.613 EAL: Selected IOVA mode 'VA' 00:06:00.613 EAL: VFIO support initialized 00:06:00.613 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:00.613 EAL: Using IOMMU type 1 (Type 1) 00:06:00.613 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:00.613 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:00.613 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:00.613 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:00.873 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:00.873 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:01.813 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:05.095 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:05.095 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:05.095 Starting DPDK initialization... 00:06:05.095 Starting SPDK post initialization... 00:06:05.095 SPDK NVMe probe 00:06:05.095 Attaching to 0000:88:00.0 00:06:05.095 Attached to 0000:88:00.0 00:06:05.096 Cleaning up... 00:06:05.096 00:06:05.096 real 0m4.381s 00:06:05.096 user 0m3.271s 00:06:05.096 sys 0m0.171s 00:06:05.096 07:39:57 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.096 07:39:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.096 ************************************ 00:06:05.096 END TEST env_dpdk_post_init 00:06:05.096 ************************************ 00:06:05.096 07:39:57 env -- env/env.sh@26 -- # uname 00:06:05.096 07:39:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:05.096 07:39:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.096 07:39:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.096 07:39:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.096 07:39:57 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.096 ************************************ 00:06:05.096 START TEST env_mem_callbacks 00:06:05.096 ************************************ 00:06:05.096 07:39:57 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.096 EAL: Detected CPU lcores: 48 00:06:05.096 EAL: Detected NUMA nodes: 2 00:06:05.096 EAL: Detected shared linkage of DPDK 00:06:05.096 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.096 EAL: Selected IOVA mode 'VA' 00:06:05.096 EAL: VFIO support initialized 00:06:05.096 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.096 00:06:05.096 00:06:05.096 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.096 http://cunit.sourceforge.net/ 00:06:05.096 00:06:05.096 00:06:05.096 Suite: memory 00:06:05.096 Test: test ... 00:06:05.096 register 0x200000200000 2097152 00:06:05.096 malloc 3145728 00:06:05.096 register 0x200000400000 4194304 00:06:05.096 buf 0x200000500000 len 3145728 PASSED 00:06:05.096 malloc 64 00:06:05.096 buf 0x2000004fff40 len 64 PASSED 00:06:05.096 malloc 4194304 00:06:05.096 register 0x200000800000 6291456 00:06:05.096 buf 0x200000a00000 len 4194304 PASSED 00:06:05.096 free 0x200000500000 3145728 00:06:05.096 free 0x2000004fff40 64 00:06:05.096 unregister 0x200000400000 4194304 PASSED 00:06:05.096 free 0x200000a00000 4194304 00:06:05.096 unregister 0x200000800000 6291456 PASSED 00:06:05.096 malloc 8388608 00:06:05.096 register 0x200000400000 10485760 00:06:05.096 buf 0x200000600000 len 8388608 PASSED 00:06:05.096 free 0x200000600000 8388608 00:06:05.096 unregister 0x200000400000 10485760 PASSED 00:06:05.096 passed 00:06:05.096 00:06:05.096 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.096 suites 1 1 n/a 0 0 00:06:05.096 tests 1 1 1 0 0 00:06:05.096 asserts 15 15 15 0 n/a 00:06:05.096 00:06:05.096 Elapsed time = 0.005 seconds 00:06:05.096 00:06:05.096 real 0m0.049s 00:06:05.096 user 0m0.017s 00:06:05.096 sys 0m0.032s 00:06:05.096 07:39:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.096 07:39:58 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:05.096 ************************************ 00:06:05.096 END TEST env_mem_callbacks 00:06:05.096 ************************************ 00:06:05.096 00:06:05.096 real 0m6.435s 00:06:05.096 user 0m4.479s 00:06:05.096 sys 0m1.011s 00:06:05.096 07:39:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.096 07:39:58 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.096 ************************************ 00:06:05.096 END TEST env 00:06:05.096 ************************************ 00:06:05.096 07:39:58 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.096 07:39:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.096 07:39:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.096 07:39:58 -- common/autotest_common.sh@10 -- # set +x 00:06:05.096 ************************************ 00:06:05.096 START TEST rpc 00:06:05.096 ************************************ 00:06:05.096 07:39:58 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.096 * Looking for test storage... 
00:06:05.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.096 07:39:58 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.096 07:39:58 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.096 07:39:58 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.355 07:39:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.355 07:39:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.355 07:39:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.355 07:39:58 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.355 07:39:58 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.355 07:39:58 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.355 07:39:58 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.355 07:39:58 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.355 07:39:58 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.355 07:39:58 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.355 07:39:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.355 07:39:58 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:05.355 07:39:58 rpc -- scripts/common.sh@345 -- # : 1 00:06:05.355 07:39:58 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.355 07:39:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.355 07:39:58 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:05.355 07:39:58 rpc -- scripts/common.sh@353 -- # local d=1 00:06:05.355 07:39:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.355 07:39:58 rpc -- scripts/common.sh@355 -- # echo 1 00:06:05.355 07:39:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.355 07:39:58 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:05.355 07:39:58 rpc -- scripts/common.sh@353 -- # local d=2 00:06:05.355 07:39:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.355 07:39:58 rpc -- scripts/common.sh@355 -- # echo 2 00:06:05.355 07:39:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.355 07:39:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.355 07:39:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.355 07:39:58 rpc -- scripts/common.sh@368 -- # return 0 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.355 --rc genhtml_branch_coverage=1 00:06:05.355 --rc genhtml_function_coverage=1 00:06:05.355 --rc genhtml_legend=1 00:06:05.355 --rc geninfo_all_blocks=1 00:06:05.355 --rc geninfo_unexecuted_blocks=1 00:06:05.355 00:06:05.355 ' 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.355 --rc genhtml_branch_coverage=1 00:06:05.355 --rc genhtml_function_coverage=1 00:06:05.355 --rc genhtml_legend=1 00:06:05.355 --rc geninfo_all_blocks=1 00:06:05.355 --rc geninfo_unexecuted_blocks=1 00:06:05.355 00:06:05.355 ' 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:05.355 --rc genhtml_branch_coverage=1 00:06:05.355 --rc genhtml_function_coverage=1 00:06:05.355 --rc genhtml_legend=1 00:06:05.355 --rc geninfo_all_blocks=1 00:06:05.355 --rc geninfo_unexecuted_blocks=1 00:06:05.355 00:06:05.355 ' 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.355 --rc genhtml_branch_coverage=1 00:06:05.355 --rc genhtml_function_coverage=1 00:06:05.355 --rc genhtml_legend=1 00:06:05.355 --rc geninfo_all_blocks=1 00:06:05.355 --rc geninfo_unexecuted_blocks=1 00:06:05.355 00:06:05.355 ' 00:06:05.355 07:39:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=592496 00:06:05.355 07:39:58 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:05.355 07:39:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.355 07:39:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 592496 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@835 -- # '[' -z 592496 ']' 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.355 07:39:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.355 [2024-11-18 07:39:58.302732] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:05.355 [2024-11-18 07:39:58.302835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid592496 ] 00:06:05.355 [2024-11-18 07:39:58.371673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.355 [2024-11-18 07:39:58.419455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.355 [2024-11-18 07:39:58.419541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 592496' to capture a snapshot of events at runtime. 00:06:05.355 [2024-11-18 07:39:58.419569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.355 [2024-11-18 07:39:58.419580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.355 [2024-11-18 07:39:58.419590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid592496 for offline analysis/debug. 
00:06:05.355 [2024-11-18 07:39:58.420189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.614 07:39:58 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.614 07:39:58 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.614 07:39:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.614 07:39:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.614 07:39:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:05.614 07:39:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:05.614 07:39:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.614 07:39:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.614 07:39:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.614 ************************************ 00:06:05.614 START TEST rpc_integrity 00:06:05.614 ************************************ 00:06:05.614 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:05.872 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.872 07:39:58 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.872 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.872 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.872 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.872 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:05.872 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.872 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.872 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.872 { 00:06:05.872 "name": "Malloc0", 00:06:05.872 "aliases": [ 00:06:05.872 "25da6771-b56c-4b5e-82da-715bd702d61d" 00:06:05.872 ], 00:06:05.872 "product_name": "Malloc disk", 00:06:05.872 "block_size": 512, 00:06:05.872 "num_blocks": 16384, 00:06:05.872 "uuid": "25da6771-b56c-4b5e-82da-715bd702d61d", 00:06:05.872 "assigned_rate_limits": { 00:06:05.872 "rw_ios_per_sec": 0, 00:06:05.872 "rw_mbytes_per_sec": 0, 00:06:05.872 "r_mbytes_per_sec": 0, 00:06:05.872 "w_mbytes_per_sec": 0 00:06:05.872 }, 00:06:05.872 "claimed": false, 00:06:05.872 "zoned": false, 00:06:05.872 "supported_io_types": { 00:06:05.872 "read": true, 00:06:05.872 "write": true, 00:06:05.872 "unmap": true, 00:06:05.872 "flush": true, 00:06:05.872 "reset": true, 00:06:05.872 "nvme_admin": false, 00:06:05.872 "nvme_io": false, 00:06:05.872 "nvme_io_md": false, 00:06:05.872 "write_zeroes": true, 00:06:05.872 "zcopy": true, 00:06:05.872 "get_zone_info": false, 00:06:05.872 
"zone_management": false, 00:06:05.872 "zone_append": false, 00:06:05.872 "compare": false, 00:06:05.872 "compare_and_write": false, 00:06:05.872 "abort": true, 00:06:05.872 "seek_hole": false, 00:06:05.872 "seek_data": false, 00:06:05.872 "copy": true, 00:06:05.872 "nvme_iov_md": false 00:06:05.872 }, 00:06:05.872 "memory_domains": [ 00:06:05.872 { 00:06:05.872 "dma_device_id": "system", 00:06:05.872 "dma_device_type": 1 00:06:05.872 }, 00:06:05.872 { 00:06:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.873 "dma_device_type": 2 00:06:05.873 } 00:06:05.873 ], 00:06:05.873 "driver_specific": {} 00:06:05.873 } 00:06:05.873 ]' 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 [2024-11-18 07:39:58.804397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:05.873 [2024-11-18 07:39:58.804434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.873 [2024-11-18 07:39:58.804454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1899b80 00:06:05.873 [2024-11-18 07:39:58.804466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.873 [2024-11-18 07:39:58.805820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.873 [2024-11-18 07:39:58.805843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.873 Passthru0 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.873 { 00:06:05.873 "name": "Malloc0", 00:06:05.873 "aliases": [ 00:06:05.873 "25da6771-b56c-4b5e-82da-715bd702d61d" 00:06:05.873 ], 00:06:05.873 "product_name": "Malloc disk", 00:06:05.873 "block_size": 512, 00:06:05.873 "num_blocks": 16384, 00:06:05.873 "uuid": "25da6771-b56c-4b5e-82da-715bd702d61d", 00:06:05.873 "assigned_rate_limits": { 00:06:05.873 "rw_ios_per_sec": 0, 00:06:05.873 "rw_mbytes_per_sec": 0, 00:06:05.873 "r_mbytes_per_sec": 0, 00:06:05.873 "w_mbytes_per_sec": 0 00:06:05.873 }, 00:06:05.873 "claimed": true, 00:06:05.873 "claim_type": "exclusive_write", 00:06:05.873 "zoned": false, 00:06:05.873 "supported_io_types": { 00:06:05.873 "read": true, 00:06:05.873 "write": true, 00:06:05.873 "unmap": true, 00:06:05.873 "flush": true, 00:06:05.873 "reset": true, 00:06:05.873 "nvme_admin": false, 00:06:05.873 "nvme_io": false, 00:06:05.873 "nvme_io_md": false, 00:06:05.873 "write_zeroes": true, 00:06:05.873 "zcopy": true, 00:06:05.873 "get_zone_info": false, 00:06:05.873 "zone_management": false, 00:06:05.873 "zone_append": false, 00:06:05.873 "compare": false, 00:06:05.873 "compare_and_write": false, 00:06:05.873 "abort": true, 00:06:05.873 "seek_hole": false, 00:06:05.873 "seek_data": false, 00:06:05.873 "copy": true, 00:06:05.873 "nvme_iov_md": false 00:06:05.873 }, 00:06:05.873 "memory_domains": [ 00:06:05.873 { 00:06:05.873 "dma_device_id": "system", 00:06:05.873 "dma_device_type": 1 00:06:05.873 }, 00:06:05.873 { 00:06:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.873 "dma_device_type": 2 00:06:05.873 } 00:06:05.873 ], 00:06:05.873 "driver_specific": {} 00:06:05.873 }, 00:06:05.873 { 
00:06:05.873 "name": "Passthru0", 00:06:05.873 "aliases": [ 00:06:05.873 "503e9be5-7dbd-5702-b3b2-dc6bbbfb451a" 00:06:05.873 ], 00:06:05.873 "product_name": "passthru", 00:06:05.873 "block_size": 512, 00:06:05.873 "num_blocks": 16384, 00:06:05.873 "uuid": "503e9be5-7dbd-5702-b3b2-dc6bbbfb451a", 00:06:05.873 "assigned_rate_limits": { 00:06:05.873 "rw_ios_per_sec": 0, 00:06:05.873 "rw_mbytes_per_sec": 0, 00:06:05.873 "r_mbytes_per_sec": 0, 00:06:05.873 "w_mbytes_per_sec": 0 00:06:05.873 }, 00:06:05.873 "claimed": false, 00:06:05.873 "zoned": false, 00:06:05.873 "supported_io_types": { 00:06:05.873 "read": true, 00:06:05.873 "write": true, 00:06:05.873 "unmap": true, 00:06:05.873 "flush": true, 00:06:05.873 "reset": true, 00:06:05.873 "nvme_admin": false, 00:06:05.873 "nvme_io": false, 00:06:05.873 "nvme_io_md": false, 00:06:05.873 "write_zeroes": true, 00:06:05.873 "zcopy": true, 00:06:05.873 "get_zone_info": false, 00:06:05.873 "zone_management": false, 00:06:05.873 "zone_append": false, 00:06:05.873 "compare": false, 00:06:05.873 "compare_and_write": false, 00:06:05.873 "abort": true, 00:06:05.873 "seek_hole": false, 00:06:05.873 "seek_data": false, 00:06:05.873 "copy": true, 00:06:05.873 "nvme_iov_md": false 00:06:05.873 }, 00:06:05.873 "memory_domains": [ 00:06:05.873 { 00:06:05.873 "dma_device_id": "system", 00:06:05.873 "dma_device_type": 1 00:06:05.873 }, 00:06:05.873 { 00:06:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.873 "dma_device_type": 2 00:06:05.873 } 00:06:05.873 ], 00:06:05.873 "driver_specific": { 00:06:05.873 "passthru": { 00:06:05.873 "name": "Passthru0", 00:06:05.873 "base_bdev_name": "Malloc0" 00:06:05.873 } 00:06:05.873 } 00:06:05.873 } 00:06:05.873 ]' 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.873 07:39:58 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.873 07:39:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.873 00:06:05.873 real 0m0.219s 00:06:05.873 user 0m0.139s 00:06:05.873 sys 0m0.024s 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.873 07:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 ************************************ 00:06:05.873 END TEST rpc_integrity 00:06:05.873 ************************************ 00:06:05.873 07:39:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:05.873 07:39:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.873 07:39:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.873 07:39:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.132 ************************************ 00:06:06.132 START TEST rpc_plugins 
00:06:06.132 ************************************ 00:06:06.132 07:39:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:06.132 07:39:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:06.132 07:39:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.132 07:39:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.132 07:39:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.132 07:39:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:06.132 07:39:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:06.132 07:39:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.132 07:39:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.132 07:39:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.132 07:39:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:06.132 { 00:06:06.132 "name": "Malloc1", 00:06:06.132 "aliases": [ 00:06:06.132 "b4fb2604-7db2-4a80-a4cb-15d2d8f0520f" 00:06:06.132 ], 00:06:06.132 "product_name": "Malloc disk", 00:06:06.132 "block_size": 4096, 00:06:06.132 "num_blocks": 256, 00:06:06.132 "uuid": "b4fb2604-7db2-4a80-a4cb-15d2d8f0520f", 00:06:06.132 "assigned_rate_limits": { 00:06:06.132 "rw_ios_per_sec": 0, 00:06:06.132 "rw_mbytes_per_sec": 0, 00:06:06.132 "r_mbytes_per_sec": 0, 00:06:06.132 "w_mbytes_per_sec": 0 00:06:06.132 }, 00:06:06.132 "claimed": false, 00:06:06.132 "zoned": false, 00:06:06.132 "supported_io_types": { 00:06:06.132 "read": true, 00:06:06.132 "write": true, 00:06:06.132 "unmap": true, 00:06:06.132 "flush": true, 00:06:06.132 "reset": true, 00:06:06.132 "nvme_admin": false, 00:06:06.132 "nvme_io": false, 00:06:06.132 "nvme_io_md": false, 00:06:06.132 "write_zeroes": true, 00:06:06.132 "zcopy": true, 00:06:06.132 "get_zone_info": false, 00:06:06.132 "zone_management": false, 00:06:06.132 
"zone_append": false, 00:06:06.132 "compare": false, 00:06:06.132 "compare_and_write": false, 00:06:06.132 "abort": true, 00:06:06.132 "seek_hole": false, 00:06:06.132 "seek_data": false, 00:06:06.132 "copy": true, 00:06:06.132 "nvme_iov_md": false 00:06:06.132 }, 00:06:06.132 "memory_domains": [ 00:06:06.132 { 00:06:06.132 "dma_device_id": "system", 00:06:06.132 "dma_device_type": 1 00:06:06.132 }, 00:06:06.132 { 00:06:06.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.132 "dma_device_type": 2 00:06:06.132 } 00:06:06.132 ], 00:06:06.132 "driver_specific": {} 00:06:06.132 } 00:06:06.132 ]' 00:06:06.132 07:39:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:06.132 07:39:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:06.132 07:39:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:06.132 07:39:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.132 07:39:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.132 07:39:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.132 07:39:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:06.132 07:39:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.132 07:39:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.132 07:39:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.132 07:39:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:06.132 07:39:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:06.132 07:39:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:06.132 00:06:06.132 real 0m0.107s 00:06:06.132 user 0m0.068s 00:06:06.132 sys 0m0.009s 00:06:06.132 07:39:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.132 07:39:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.132 ************************************ 
00:06:06.132 END TEST rpc_plugins 00:06:06.132 ************************************ 00:06:06.132 07:39:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:06.132 07:39:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.132 07:39:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.132 07:39:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.132 ************************************ 00:06:06.132 START TEST rpc_trace_cmd_test 00:06:06.132 ************************************ 00:06:06.132 07:39:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:06.132 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:06.132 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:06.132 07:39:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.132 07:39:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.132 07:39:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.132 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:06.132 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid592496", 00:06:06.132 "tpoint_group_mask": "0x8", 00:06:06.132 "iscsi_conn": { 00:06:06.132 "mask": "0x2", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "scsi": { 00:06:06.133 "mask": "0x4", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "bdev": { 00:06:06.133 "mask": "0x8", 00:06:06.133 "tpoint_mask": "0xffffffffffffffff" 00:06:06.133 }, 00:06:06.133 "nvmf_rdma": { 00:06:06.133 "mask": "0x10", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "nvmf_tcp": { 00:06:06.133 "mask": "0x20", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "ftl": { 00:06:06.133 "mask": "0x40", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "blobfs": { 00:06:06.133 "mask": "0x80", 00:06:06.133 
"tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "dsa": { 00:06:06.133 "mask": "0x200", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "thread": { 00:06:06.133 "mask": "0x400", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "nvme_pcie": { 00:06:06.133 "mask": "0x800", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "iaa": { 00:06:06.133 "mask": "0x1000", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "nvme_tcp": { 00:06:06.133 "mask": "0x2000", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "bdev_nvme": { 00:06:06.133 "mask": "0x4000", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "sock": { 00:06:06.133 "mask": "0x8000", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "blob": { 00:06:06.133 "mask": "0x10000", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "bdev_raid": { 00:06:06.133 "mask": "0x20000", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 }, 00:06:06.133 "scheduler": { 00:06:06.133 "mask": "0x40000", 00:06:06.133 "tpoint_mask": "0x0" 00:06:06.133 } 00:06:06.133 }' 00:06:06.133 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:06.133 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:06.133 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:06.133 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:06.133 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:06.391 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:06.391 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:06.391 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:06.391 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:06.391 07:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:06.391 00:06:06.391 real 0m0.183s 00:06:06.391 user 0m0.165s 00:06:06.391 sys 0m0.011s 00:06:06.391 07:39:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.391 07:39:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.391 ************************************ 00:06:06.391 END TEST rpc_trace_cmd_test 00:06:06.391 ************************************ 00:06:06.391 07:39:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:06.391 07:39:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:06.391 07:39:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:06.391 07:39:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.391 07:39:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.391 07:39:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.391 ************************************ 00:06:06.391 START TEST rpc_daemon_integrity 00:06:06.391 ************************************ 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.391 { 00:06:06.391 "name": "Malloc2", 00:06:06.391 "aliases": [ 00:06:06.391 "3b223d2d-2aec-45d6-97f5-eaf8b9c0c1f1" 00:06:06.391 ], 00:06:06.391 "product_name": "Malloc disk", 00:06:06.391 "block_size": 512, 00:06:06.391 "num_blocks": 16384, 00:06:06.391 "uuid": "3b223d2d-2aec-45d6-97f5-eaf8b9c0c1f1", 00:06:06.391 "assigned_rate_limits": { 00:06:06.391 "rw_ios_per_sec": 0, 00:06:06.391 "rw_mbytes_per_sec": 0, 00:06:06.391 "r_mbytes_per_sec": 0, 00:06:06.391 "w_mbytes_per_sec": 0 00:06:06.391 }, 00:06:06.391 "claimed": false, 00:06:06.391 "zoned": false, 00:06:06.391 "supported_io_types": { 00:06:06.391 "read": true, 00:06:06.391 "write": true, 00:06:06.391 "unmap": true, 00:06:06.391 "flush": true, 00:06:06.391 "reset": true, 00:06:06.391 "nvme_admin": false, 00:06:06.391 "nvme_io": false, 00:06:06.391 "nvme_io_md": false, 00:06:06.391 "write_zeroes": true, 00:06:06.391 "zcopy": true, 00:06:06.391 "get_zone_info": false, 00:06:06.391 "zone_management": false, 00:06:06.391 "zone_append": false, 00:06:06.391 "compare": false, 00:06:06.391 "compare_and_write": false, 00:06:06.391 "abort": true, 00:06:06.391 "seek_hole": false, 00:06:06.391 "seek_data": false, 00:06:06.391 "copy": true, 00:06:06.391 "nvme_iov_md": false 00:06:06.391 }, 00:06:06.391 "memory_domains": [ 00:06:06.391 { 
00:06:06.391 "dma_device_id": "system", 00:06:06.391 "dma_device_type": 1 00:06:06.391 }, 00:06:06.391 { 00:06:06.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.391 "dma_device_type": 2 00:06:06.391 } 00:06:06.391 ], 00:06:06.391 "driver_specific": {} 00:06:06.391 } 00:06:06.391 ]' 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.391 [2024-11-18 07:39:59.458267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:06.391 [2024-11-18 07:39:59.458304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.391 [2024-11-18 07:39:59.458328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x189d390 00:06:06.391 [2024-11-18 07:39:59.458342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.391 [2024-11-18 07:39:59.459526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.391 [2024-11-18 07:39:59.459566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.391 Passthru0 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:06.391 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:06.391 { 00:06:06.391 "name": "Malloc2", 00:06:06.391 "aliases": [ 00:06:06.391 "3b223d2d-2aec-45d6-97f5-eaf8b9c0c1f1" 00:06:06.391 ], 00:06:06.391 "product_name": "Malloc disk", 00:06:06.391 "block_size": 512, 00:06:06.391 "num_blocks": 16384, 00:06:06.391 "uuid": "3b223d2d-2aec-45d6-97f5-eaf8b9c0c1f1", 00:06:06.391 "assigned_rate_limits": { 00:06:06.392 "rw_ios_per_sec": 0, 00:06:06.392 "rw_mbytes_per_sec": 0, 00:06:06.392 "r_mbytes_per_sec": 0, 00:06:06.392 "w_mbytes_per_sec": 0 00:06:06.392 }, 00:06:06.392 "claimed": true, 00:06:06.392 "claim_type": "exclusive_write", 00:06:06.392 "zoned": false, 00:06:06.392 "supported_io_types": { 00:06:06.392 "read": true, 00:06:06.392 "write": true, 00:06:06.392 "unmap": true, 00:06:06.392 "flush": true, 00:06:06.392 "reset": true, 00:06:06.392 "nvme_admin": false, 00:06:06.392 "nvme_io": false, 00:06:06.392 "nvme_io_md": false, 00:06:06.392 "write_zeroes": true, 00:06:06.392 "zcopy": true, 00:06:06.392 "get_zone_info": false, 00:06:06.392 "zone_management": false, 00:06:06.392 "zone_append": false, 00:06:06.392 "compare": false, 00:06:06.392 "compare_and_write": false, 00:06:06.392 "abort": true, 00:06:06.392 "seek_hole": false, 00:06:06.392 "seek_data": false, 00:06:06.392 "copy": true, 00:06:06.392 "nvme_iov_md": false 00:06:06.392 }, 00:06:06.392 "memory_domains": [ 00:06:06.392 { 00:06:06.392 "dma_device_id": "system", 00:06:06.392 "dma_device_type": 1 00:06:06.392 }, 00:06:06.392 { 00:06:06.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.392 "dma_device_type": 2 00:06:06.392 } 00:06:06.392 ], 00:06:06.392 "driver_specific": {} 00:06:06.392 }, 00:06:06.392 { 00:06:06.392 "name": "Passthru0", 00:06:06.392 "aliases": [ 00:06:06.392 "fa9654b4-3e81-5835-ba96-e0478ea63c16" 00:06:06.392 ], 00:06:06.392 "product_name": "passthru", 00:06:06.392 "block_size": 512, 00:06:06.392 "num_blocks": 16384, 00:06:06.392 "uuid": 
"fa9654b4-3e81-5835-ba96-e0478ea63c16", 00:06:06.392 "assigned_rate_limits": { 00:06:06.392 "rw_ios_per_sec": 0, 00:06:06.392 "rw_mbytes_per_sec": 0, 00:06:06.392 "r_mbytes_per_sec": 0, 00:06:06.392 "w_mbytes_per_sec": 0 00:06:06.392 }, 00:06:06.392 "claimed": false, 00:06:06.392 "zoned": false, 00:06:06.392 "supported_io_types": { 00:06:06.392 "read": true, 00:06:06.392 "write": true, 00:06:06.392 "unmap": true, 00:06:06.392 "flush": true, 00:06:06.392 "reset": true, 00:06:06.392 "nvme_admin": false, 00:06:06.392 "nvme_io": false, 00:06:06.392 "nvme_io_md": false, 00:06:06.392 "write_zeroes": true, 00:06:06.392 "zcopy": true, 00:06:06.392 "get_zone_info": false, 00:06:06.392 "zone_management": false, 00:06:06.392 "zone_append": false, 00:06:06.392 "compare": false, 00:06:06.392 "compare_and_write": false, 00:06:06.392 "abort": true, 00:06:06.392 "seek_hole": false, 00:06:06.392 "seek_data": false, 00:06:06.392 "copy": true, 00:06:06.392 "nvme_iov_md": false 00:06:06.392 }, 00:06:06.392 "memory_domains": [ 00:06:06.392 { 00:06:06.392 "dma_device_id": "system", 00:06:06.392 "dma_device_type": 1 00:06:06.392 }, 00:06:06.392 { 00:06:06.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.392 "dma_device_type": 2 00:06:06.392 } 00:06:06.392 ], 00:06:06.392 "driver_specific": { 00:06:06.392 "passthru": { 00:06:06.392 "name": "Passthru0", 00:06:06.392 "base_bdev_name": "Malloc2" 00:06:06.392 } 00:06:06.392 } 00:06:06.392 } 00:06:06.392 ]' 00:06:06.392 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.650 00:06:06.650 real 0m0.224s 00:06:06.650 user 0m0.147s 00:06:06.650 sys 0m0.021s 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.650 07:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.650 ************************************ 00:06:06.650 END TEST rpc_daemon_integrity 00:06:06.650 ************************************ 00:06:06.650 07:39:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.650 07:39:59 rpc -- rpc/rpc.sh@84 -- # killprocess 592496 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@954 -- # '[' -z 592496 ']' 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@958 -- # kill -0 592496 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.650 07:39:59 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592496 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 592496' 00:06:06.650 killing process with pid 592496 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@973 -- # kill 592496 00:06:06.650 07:39:59 rpc -- common/autotest_common.sh@978 -- # wait 592496 00:06:07.217 00:06:07.217 real 0m1.903s 00:06:07.217 user 0m2.376s 00:06:07.217 sys 0m0.604s 00:06:07.217 07:40:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.217 07:40:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.217 ************************************ 00:06:07.217 END TEST rpc 00:06:07.217 ************************************ 00:06:07.217 07:40:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:07.217 07:40:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.217 07:40:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.217 07:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:07.217 ************************************ 00:06:07.217 START TEST skip_rpc 00:06:07.217 ************************************ 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:07.217 * Looking for test storage... 
00:06:07.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.217 07:40:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.217 --rc genhtml_branch_coverage=1 00:06:07.217 --rc genhtml_function_coverage=1 00:06:07.217 --rc genhtml_legend=1 00:06:07.217 --rc geninfo_all_blocks=1 00:06:07.217 --rc geninfo_unexecuted_blocks=1 00:06:07.217 00:06:07.217 ' 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.217 --rc genhtml_branch_coverage=1 00:06:07.217 --rc genhtml_function_coverage=1 00:06:07.217 --rc genhtml_legend=1 00:06:07.217 --rc geninfo_all_blocks=1 00:06:07.217 --rc geninfo_unexecuted_blocks=1 00:06:07.217 00:06:07.217 ' 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:07.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.217 --rc genhtml_branch_coverage=1 00:06:07.217 --rc genhtml_function_coverage=1 00:06:07.217 --rc genhtml_legend=1 00:06:07.217 --rc geninfo_all_blocks=1 00:06:07.217 --rc geninfo_unexecuted_blocks=1 00:06:07.217 00:06:07.217 ' 00:06:07.217 07:40:00 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.217 --rc genhtml_branch_coverage=1 00:06:07.217 --rc genhtml_function_coverage=1 00:06:07.217 --rc genhtml_legend=1 00:06:07.217 --rc geninfo_all_blocks=1 00:06:07.217 --rc geninfo_unexecuted_blocks=1 00:06:07.217 00:06:07.217 ' 00:06:07.217 07:40:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:07.217 07:40:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:07.217 07:40:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:07.218 07:40:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.218 07:40:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.218 07:40:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.218 ************************************ 00:06:07.218 START TEST skip_rpc 00:06:07.218 ************************************ 00:06:07.218 07:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:07.218 07:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=592943 00:06:07.218 07:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:07.218 07:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.218 07:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:07.218 [2024-11-18 07:40:00.285223] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:07.218 [2024-11-18 07:40:00.285291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid592943 ] 00:06:07.476 [2024-11-18 07:40:00.352417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.476 [2024-11-18 07:40:00.399630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.739 07:40:05 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 592943 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 592943 ']' 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 592943 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592943 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 592943' 00:06:12.739 killing process with pid 592943 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 592943 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 592943 00:06:12.739 00:06:12.739 real 0m5.411s 00:06:12.739 user 0m5.121s 00:06:12.739 sys 0m0.301s 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.739 07:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.739 ************************************ 00:06:12.739 END TEST skip_rpc 00:06:12.739 ************************************ 00:06:12.739 07:40:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:12.739 07:40:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.739 07:40:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.739 07:40:05 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.739 ************************************ 00:06:12.739 START TEST skip_rpc_with_json 00:06:12.739 ************************************ 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=593636 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 593636 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 593636 ']' 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.739 07:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.739 [2024-11-18 07:40:05.744000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:12.739 [2024-11-18 07:40:05.744106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593636 ] 00:06:12.739 [2024-11-18 07:40:05.809402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.997 [2024-11-18 07:40:05.853155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 [2024-11-18 07:40:06.097424] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:13.256 request: 00:06:13.256 { 00:06:13.256 "trtype": "tcp", 00:06:13.256 "method": "nvmf_get_transports", 00:06:13.256 "req_id": 1 00:06:13.256 } 00:06:13.256 Got JSON-RPC error response 00:06:13.256 response: 00:06:13.256 { 00:06:13.256 "code": -19, 00:06:13.256 "message": "No such device" 00:06:13.256 } 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 [2024-11-18 07:40:06.105556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.256 07:40:06 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.256 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:13.256 { 00:06:13.256 "subsystems": [ 00:06:13.256 { 00:06:13.256 "subsystem": "fsdev", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "fsdev_set_opts", 00:06:13.256 "params": { 00:06:13.256 "fsdev_io_pool_size": 65535, 00:06:13.256 "fsdev_io_cache_size": 256 00:06:13.256 } 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "vfio_user_target", 00:06:13.256 "config": null 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "keyring", 00:06:13.256 "config": [] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "iobuf", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "iobuf_set_options", 00:06:13.256 "params": { 00:06:13.256 "small_pool_count": 8192, 00:06:13.256 "large_pool_count": 1024, 00:06:13.256 "small_bufsize": 8192, 00:06:13.256 "large_bufsize": 135168, 00:06:13.256 "enable_numa": false 00:06:13.256 } 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "sock", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "sock_set_default_impl", 00:06:13.256 "params": { 00:06:13.256 "impl_name": "posix" 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "sock_impl_set_options", 00:06:13.256 "params": { 00:06:13.256 "impl_name": "ssl", 00:06:13.256 "recv_buf_size": 4096, 00:06:13.256 "send_buf_size": 4096, 
00:06:13.256 "enable_recv_pipe": true, 00:06:13.256 "enable_quickack": false, 00:06:13.256 "enable_placement_id": 0, 00:06:13.256 "enable_zerocopy_send_server": true, 00:06:13.256 "enable_zerocopy_send_client": false, 00:06:13.256 "zerocopy_threshold": 0, 00:06:13.256 "tls_version": 0, 00:06:13.256 "enable_ktls": false 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "sock_impl_set_options", 00:06:13.256 "params": { 00:06:13.256 "impl_name": "posix", 00:06:13.256 "recv_buf_size": 2097152, 00:06:13.256 "send_buf_size": 2097152, 00:06:13.256 "enable_recv_pipe": true, 00:06:13.256 "enable_quickack": false, 00:06:13.256 "enable_placement_id": 0, 00:06:13.256 "enable_zerocopy_send_server": true, 00:06:13.256 "enable_zerocopy_send_client": false, 00:06:13.256 "zerocopy_threshold": 0, 00:06:13.256 "tls_version": 0, 00:06:13.256 "enable_ktls": false 00:06:13.256 } 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "vmd", 00:06:13.256 "config": [] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "accel", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "accel_set_options", 00:06:13.256 "params": { 00:06:13.256 "small_cache_size": 128, 00:06:13.256 "large_cache_size": 16, 00:06:13.256 "task_count": 2048, 00:06:13.256 "sequence_count": 2048, 00:06:13.256 "buf_count": 2048 00:06:13.256 } 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "bdev", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "bdev_set_options", 00:06:13.256 "params": { 00:06:13.256 "bdev_io_pool_size": 65535, 00:06:13.256 "bdev_io_cache_size": 256, 00:06:13.256 "bdev_auto_examine": true, 00:06:13.256 "iobuf_small_cache_size": 128, 00:06:13.256 "iobuf_large_cache_size": 16 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "bdev_raid_set_options", 00:06:13.256 "params": { 00:06:13.256 "process_window_size_kb": 1024, 00:06:13.256 "process_max_bandwidth_mb_sec": 0 
00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "bdev_iscsi_set_options", 00:06:13.256 "params": { 00:06:13.256 "timeout_sec": 30 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "bdev_nvme_set_options", 00:06:13.256 "params": { 00:06:13.256 "action_on_timeout": "none", 00:06:13.256 "timeout_us": 0, 00:06:13.256 "timeout_admin_us": 0, 00:06:13.256 "keep_alive_timeout_ms": 10000, 00:06:13.256 "arbitration_burst": 0, 00:06:13.256 "low_priority_weight": 0, 00:06:13.256 "medium_priority_weight": 0, 00:06:13.256 "high_priority_weight": 0, 00:06:13.256 "nvme_adminq_poll_period_us": 10000, 00:06:13.256 "nvme_ioq_poll_period_us": 0, 00:06:13.256 "io_queue_requests": 0, 00:06:13.256 "delay_cmd_submit": true, 00:06:13.256 "transport_retry_count": 4, 00:06:13.256 "bdev_retry_count": 3, 00:06:13.256 "transport_ack_timeout": 0, 00:06:13.256 "ctrlr_loss_timeout_sec": 0, 00:06:13.256 "reconnect_delay_sec": 0, 00:06:13.256 "fast_io_fail_timeout_sec": 0, 00:06:13.256 "disable_auto_failback": false, 00:06:13.256 "generate_uuids": false, 00:06:13.256 "transport_tos": 0, 00:06:13.256 "nvme_error_stat": false, 00:06:13.256 "rdma_srq_size": 0, 00:06:13.256 "io_path_stat": false, 00:06:13.256 "allow_accel_sequence": false, 00:06:13.256 "rdma_max_cq_size": 0, 00:06:13.256 "rdma_cm_event_timeout_ms": 0, 00:06:13.256 "dhchap_digests": [ 00:06:13.256 "sha256", 00:06:13.256 "sha384", 00:06:13.256 "sha512" 00:06:13.256 ], 00:06:13.256 "dhchap_dhgroups": [ 00:06:13.256 "null", 00:06:13.256 "ffdhe2048", 00:06:13.256 "ffdhe3072", 00:06:13.256 "ffdhe4096", 00:06:13.256 "ffdhe6144", 00:06:13.256 "ffdhe8192" 00:06:13.256 ] 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "bdev_nvme_set_hotplug", 00:06:13.256 "params": { 00:06:13.256 "period_us": 100000, 00:06:13.256 "enable": false 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "bdev_wait_for_examine" 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 
00:06:13.256 "subsystem": "scsi", 00:06:13.256 "config": null 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "scheduler", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "framework_set_scheduler", 00:06:13.256 "params": { 00:06:13.256 "name": "static" 00:06:13.256 } 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "vhost_scsi", 00:06:13.256 "config": [] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "vhost_blk", 00:06:13.256 "config": [] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "ublk", 00:06:13.256 "config": [] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "nbd", 00:06:13.256 "config": [] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "nvmf", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "nvmf_set_config", 00:06:13.256 "params": { 00:06:13.257 "discovery_filter": "match_any", 00:06:13.257 "admin_cmd_passthru": { 00:06:13.257 "identify_ctrlr": false 00:06:13.257 }, 00:06:13.257 "dhchap_digests": [ 00:06:13.257 "sha256", 00:06:13.257 "sha384", 00:06:13.257 "sha512" 00:06:13.257 ], 00:06:13.257 "dhchap_dhgroups": [ 00:06:13.257 "null", 00:06:13.257 "ffdhe2048", 00:06:13.257 "ffdhe3072", 00:06:13.257 "ffdhe4096", 00:06:13.257 "ffdhe6144", 00:06:13.257 "ffdhe8192" 00:06:13.257 ] 00:06:13.257 } 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "method": "nvmf_set_max_subsystems", 00:06:13.257 "params": { 00:06:13.257 "max_subsystems": 1024 00:06:13.257 } 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "method": "nvmf_set_crdt", 00:06:13.257 "params": { 00:06:13.257 "crdt1": 0, 00:06:13.257 "crdt2": 0, 00:06:13.257 "crdt3": 0 00:06:13.257 } 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "method": "nvmf_create_transport", 00:06:13.257 "params": { 00:06:13.257 "trtype": "TCP", 00:06:13.257 "max_queue_depth": 128, 00:06:13.257 "max_io_qpairs_per_ctrlr": 127, 00:06:13.257 "in_capsule_data_size": 4096, 00:06:13.257 "max_io_size": 131072, 00:06:13.257 
"io_unit_size": 131072, 00:06:13.257 "max_aq_depth": 128, 00:06:13.257 "num_shared_buffers": 511, 00:06:13.257 "buf_cache_size": 4294967295, 00:06:13.257 "dif_insert_or_strip": false, 00:06:13.257 "zcopy": false, 00:06:13.257 "c2h_success": true, 00:06:13.257 "sock_priority": 0, 00:06:13.257 "abort_timeout_sec": 1, 00:06:13.257 "ack_timeout": 0, 00:06:13.257 "data_wr_pool_size": 0 00:06:13.257 } 00:06:13.257 } 00:06:13.257 ] 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "iscsi", 00:06:13.257 "config": [ 00:06:13.257 { 00:06:13.257 "method": "iscsi_set_options", 00:06:13.257 "params": { 00:06:13.257 "node_base": "iqn.2016-06.io.spdk", 00:06:13.257 "max_sessions": 128, 00:06:13.257 "max_connections_per_session": 2, 00:06:13.257 "max_queue_depth": 64, 00:06:13.257 "default_time2wait": 2, 00:06:13.257 "default_time2retain": 20, 00:06:13.257 "first_burst_length": 8192, 00:06:13.257 "immediate_data": true, 00:06:13.257 "allow_duplicated_isid": false, 00:06:13.257 "error_recovery_level": 0, 00:06:13.257 "nop_timeout": 60, 00:06:13.257 "nop_in_interval": 30, 00:06:13.257 "disable_chap": false, 00:06:13.257 "require_chap": false, 00:06:13.257 "mutual_chap": false, 00:06:13.257 "chap_group": 0, 00:06:13.257 "max_large_datain_per_connection": 64, 00:06:13.257 "max_r2t_per_connection": 4, 00:06:13.257 "pdu_pool_size": 36864, 00:06:13.257 "immediate_data_pool_size": 16384, 00:06:13.257 "data_out_pool_size": 2048 00:06:13.257 } 00:06:13.257 } 00:06:13.257 ] 00:06:13.257 } 00:06:13.257 ] 00:06:13.257 } 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 593636 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 593636 ']' 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 593636 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 593636 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 593636' 00:06:13.257 killing process with pid 593636 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 593636 00:06:13.257 07:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 593636 00:06:13.823 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=593776 00:06:13.823 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:13.823 07:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 593776 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 593776 ']' 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 593776 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 593776 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 593776' 00:06:19.097 killing process with pid 593776 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 593776 00:06:19.097 07:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 593776 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:19.097 00:06:19.097 real 0m6.416s 00:06:19.097 user 0m6.098s 00:06:19.097 sys 0m0.632s 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.097 ************************************ 00:06:19.097 END TEST skip_rpc_with_json 00:06:19.097 ************************************ 00:06:19.097 07:40:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:19.097 07:40:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.097 07:40:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.097 07:40:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.097 ************************************ 00:06:19.097 START TEST skip_rpc_with_delay 00:06:19.097 ************************************ 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.097 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.356 [2024-11-18 07:40:12.216757] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:19.356 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:19.356 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.356 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.356 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.356 00:06:19.356 real 0m0.075s 00:06:19.356 user 0m0.047s 00:06:19.356 sys 0m0.028s 00:06:19.356 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.356 07:40:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:19.356 ************************************ 00:06:19.356 END TEST skip_rpc_with_delay 00:06:19.356 ************************************ 00:06:19.356 07:40:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:19.356 07:40:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:19.356 07:40:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:19.356 07:40:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.356 07:40:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.356 07:40:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.356 ************************************ 00:06:19.356 START TEST exit_on_failed_rpc_init 00:06:19.356 ************************************ 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=594487 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 594487 
00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 594487 ']' 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.356 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.356 [2024-11-18 07:40:12.343706] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:19.356 [2024-11-18 07:40:12.343809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594487 ] 00:06:19.356 [2024-11-18 07:40:12.410498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.614 [2024-11-18 07:40:12.461813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.873 
07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.873 [2024-11-18 07:40:12.772814] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:19.873 [2024-11-18 07:40:12.772896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594504 ] 00:06:19.873 [2024-11-18 07:40:12.839563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.873 [2024-11-18 07:40:12.887690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.873 [2024-11-18 07:40:12.887804] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:19.873 [2024-11-18 07:40:12.887824] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.873 [2024-11-18 07:40:12.887836] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 594487 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 594487 ']' 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 594487 00:06:19.873 07:40:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.873 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 594487 00:06:20.131 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.131 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.131 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 594487' 00:06:20.131 killing process with pid 594487 00:06:20.131 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 594487 00:06:20.131 07:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 594487 00:06:20.395 00:06:20.396 real 0m1.066s 00:06:20.396 user 0m1.147s 00:06:20.396 sys 0m0.437s 00:06:20.396 07:40:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.396 07:40:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.396 ************************************ 00:06:20.396 END TEST exit_on_failed_rpc_init 00:06:20.396 ************************************ 00:06:20.396 07:40:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.396 00:06:20.396 real 0m13.323s 00:06:20.396 user 0m12.606s 00:06:20.396 sys 0m1.579s 00:06:20.396 07:40:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.396 07:40:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.396 ************************************ 00:06:20.396 END TEST skip_rpc 00:06:20.396 ************************************ 00:06:20.396 07:40:13 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.396 07:40:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.396 07:40:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.396 07:40:13 -- common/autotest_common.sh@10 -- # set +x 00:06:20.396 ************************************ 00:06:20.396 START TEST rpc_client 00:06:20.396 ************************************ 00:06:20.396 07:40:13 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.689 * Looking for test storage... 00:06:20.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:20.689 07:40:13 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.689 07:40:13 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.689 07:40:13 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.689 07:40:13 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.689 07:40:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.690 07:40:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:20.690 07:40:13 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.690 07:40:13 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.690 --rc genhtml_branch_coverage=1 00:06:20.690 --rc genhtml_function_coverage=1 00:06:20.690 --rc genhtml_legend=1 00:06:20.690 --rc geninfo_all_blocks=1 00:06:20.690 --rc geninfo_unexecuted_blocks=1 00:06:20.690 00:06:20.690 ' 00:06:20.690 07:40:13 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.690 --rc genhtml_branch_coverage=1 
00:06:20.690 --rc genhtml_function_coverage=1 00:06:20.690 --rc genhtml_legend=1 00:06:20.690 --rc geninfo_all_blocks=1 00:06:20.690 --rc geninfo_unexecuted_blocks=1 00:06:20.690 00:06:20.690 ' 00:06:20.690 07:40:13 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.690 --rc genhtml_branch_coverage=1 00:06:20.690 --rc genhtml_function_coverage=1 00:06:20.690 --rc genhtml_legend=1 00:06:20.690 --rc geninfo_all_blocks=1 00:06:20.690 --rc geninfo_unexecuted_blocks=1 00:06:20.690 00:06:20.690 ' 00:06:20.690 07:40:13 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.690 --rc genhtml_branch_coverage=1 00:06:20.690 --rc genhtml_function_coverage=1 00:06:20.690 --rc genhtml_legend=1 00:06:20.690 --rc geninfo_all_blocks=1 00:06:20.690 --rc geninfo_unexecuted_blocks=1 00:06:20.690 00:06:20.690 ' 00:06:20.690 07:40:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:20.690 OK 00:06:20.690 07:40:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:20.690 00:06:20.690 real 0m0.164s 00:06:20.690 user 0m0.109s 00:06:20.690 sys 0m0.063s 00:06:20.690 07:40:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.690 07:40:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:20.690 ************************************ 00:06:20.690 END TEST rpc_client 00:06:20.690 ************************************ 00:06:20.690 07:40:13 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.690 07:40:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.690 07:40:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.690 07:40:13 -- common/autotest_common.sh@10 
-- # set +x 00:06:20.690 ************************************ 00:06:20.690 START TEST json_config 00:06:20.690 ************************************ 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.690 07:40:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.690 07:40:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.690 07:40:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.690 07:40:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.690 07:40:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.690 07:40:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.690 07:40:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.690 07:40:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.690 07:40:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.690 07:40:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.690 07:40:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.690 07:40:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:20.690 07:40:13 json_config -- scripts/common.sh@345 -- # : 1 00:06:20.690 07:40:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.690 07:40:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.690 07:40:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:20.690 07:40:13 json_config -- scripts/common.sh@353 -- # local d=1 00:06:20.690 07:40:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.690 07:40:13 json_config -- scripts/common.sh@355 -- # echo 1 00:06:20.690 07:40:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.690 07:40:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:20.690 07:40:13 json_config -- scripts/common.sh@353 -- # local d=2 00:06:20.690 07:40:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.690 07:40:13 json_config -- scripts/common.sh@355 -- # echo 2 00:06:20.690 07:40:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.690 07:40:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.690 07:40:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.690 07:40:13 json_config -- scripts/common.sh@368 -- # return 0 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.690 --rc genhtml_branch_coverage=1 00:06:20.690 --rc genhtml_function_coverage=1 00:06:20.690 --rc genhtml_legend=1 00:06:20.690 --rc geninfo_all_blocks=1 00:06:20.690 --rc geninfo_unexecuted_blocks=1 00:06:20.690 00:06:20.690 ' 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.690 --rc genhtml_branch_coverage=1 00:06:20.690 --rc genhtml_function_coverage=1 00:06:20.690 --rc genhtml_legend=1 00:06:20.690 --rc geninfo_all_blocks=1 00:06:20.690 --rc geninfo_unexecuted_blocks=1 00:06:20.690 00:06:20.690 ' 00:06:20.690 07:40:13 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.690 --rc genhtml_branch_coverage=1 00:06:20.690 --rc genhtml_function_coverage=1 00:06:20.690 --rc genhtml_legend=1 00:06:20.690 --rc geninfo_all_blocks=1 00:06:20.690 --rc geninfo_unexecuted_blocks=1 00:06:20.690 00:06:20.690 ' 00:06:20.690 07:40:13 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.690 --rc genhtml_branch_coverage=1 00:06:20.690 --rc genhtml_function_coverage=1 00:06:20.690 --rc genhtml_legend=1 00:06:20.690 --rc geninfo_all_blocks=1 00:06:20.690 --rc geninfo_unexecuted_blocks=1 00:06:20.690 00:06:20.690 ' 00:06:20.690 07:40:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.690 07:40:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:20.690 07:40:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.690 07:40:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.690 07:40:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.690 07:40:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.690 07:40:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.690 07:40:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.690 07:40:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.691 07:40:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.973 07:40:13 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.973 07:40:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.973 07:40:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.973 07:40:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.973 07:40:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.973 07:40:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.973 07:40:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.973 07:40:13 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.973 07:40:13 json_config -- paths/export.sh@5 -- # export PATH 00:06:20.974 07:40:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@51 -- # : 0 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.974 07:40:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:20.974 INFO: JSON configuration test init 00:06:20.974 07:40:13 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.974 07:40:13 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:20.974 07:40:13 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.974 07:40:13 json_config -- json_config/common.sh@10 -- # shift 00:06:20.974 07:40:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.974 07:40:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.974 07:40:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.974 07:40:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.974 07:40:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.974 07:40:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=594764 00:06:20.974 07:40:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:20.974 07:40:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.974 Waiting for target to run... 
00:06:20.974 07:40:13 json_config -- json_config/common.sh@25 -- # waitforlisten 594764 /var/tmp/spdk_tgt.sock 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@835 -- # '[' -z 594764 ']' 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.974 07:40:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.974 [2024-11-18 07:40:13.836578] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:20.974 [2024-11-18 07:40:13.836676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594764 ] 00:06:21.564 [2024-11-18 07:40:14.332698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.564 [2024-11-18 07:40:14.374376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.822 07:40:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.822 07:40:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:21.822 07:40:14 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.822 00:06:21.822 07:40:14 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:21.822 07:40:14 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:21.822 07:40:14 json_config -- common/autotest_common.sh@726 
-- # xtrace_disable 00:06:21.822 07:40:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.822 07:40:14 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:21.822 07:40:14 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:21.822 07:40:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.822 07:40:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.822 07:40:14 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:21.822 07:40:14 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:21.822 07:40:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:25.111 07:40:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.111 07:40:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:25.111 07:40:18 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:25.111 07:40:18 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@54 -- # sort 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:25.370 07:40:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.370 07:40:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:25.370 07:40:18 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:25.370 07:40:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.370 07:40:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:25.370 07:40:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.370 07:40:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.629 MallocForNvmf0 00:06:25.629 07:40:18 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.629 07:40:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.887 MallocForNvmf1 00:06:25.887 07:40:18 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.887 07:40:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:26.145 [2024-11-18 07:40:19.126448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.145 07:40:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:26.145 07:40:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:26.407 07:40:19 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.407 07:40:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.664 07:40:19 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.664 07:40:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.922 07:40:19 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.922 07:40:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:27.180 [2024-11-18 07:40:20.201987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:27.180 07:40:20 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:27.180 07:40:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.180 07:40:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.180 07:40:20 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:27.180 07:40:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.180 07:40:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.180 07:40:20 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:06:27.180 07:40:20 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.180 07:40:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.438 MallocBdevForConfigChangeCheck 00:06:27.697 07:40:20 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:27.697 07:40:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.697 07:40:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.697 07:40:20 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:27.697 07:40:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.955 07:40:20 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:27.955 INFO: shutting down applications... 
00:06:27.955 07:40:20 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:27.955 07:40:20 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:27.955 07:40:20 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:27.955 07:40:20 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:29.856 Calling clear_iscsi_subsystem 00:06:29.856 Calling clear_nvmf_subsystem 00:06:29.856 Calling clear_nbd_subsystem 00:06:29.856 Calling clear_ublk_subsystem 00:06:29.856 Calling clear_vhost_blk_subsystem 00:06:29.856 Calling clear_vhost_scsi_subsystem 00:06:29.856 Calling clear_bdev_subsystem 00:06:29.856 07:40:22 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:29.856 07:40:22 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:29.856 07:40:22 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:29.856 07:40:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.856 07:40:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:29.856 07:40:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:30.114 07:40:23 json_config -- json_config/json_config.sh@352 -- # break 00:06:30.114 07:40:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:30.114 07:40:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:30.114 07:40:23 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:30.114 07:40:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:30.114 07:40:23 json_config -- json_config/common.sh@35 -- # [[ -n 594764 ]] 00:06:30.114 07:40:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 594764 00:06:30.114 07:40:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:30.114 07:40:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.114 07:40:23 json_config -- json_config/common.sh@41 -- # kill -0 594764 00:06:30.114 07:40:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.684 07:40:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.684 07:40:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.684 07:40:23 json_config -- json_config/common.sh@41 -- # kill -0 594764 00:06:30.684 07:40:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.684 07:40:23 json_config -- json_config/common.sh@43 -- # break 00:06:30.684 07:40:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.684 07:40:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.684 SPDK target shutdown done 00:06:30.684 07:40:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:30.684 INFO: relaunching applications... 
00:06:30.684 07:40:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.684 07:40:23 json_config -- json_config/common.sh@9 -- # local app=target 00:06:30.684 07:40:23 json_config -- json_config/common.sh@10 -- # shift 00:06:30.684 07:40:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.684 07:40:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.684 07:40:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.684 07:40:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.684 07:40:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.684 07:40:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=596080 00:06:30.684 07:40:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.684 07:40:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.684 Waiting for target to run... 00:06:30.684 07:40:23 json_config -- json_config/common.sh@25 -- # waitforlisten 596080 /var/tmp/spdk_tgt.sock 00:06:30.684 07:40:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 596080 ']' 00:06:30.684 07:40:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.684 07:40:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.684 07:40:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:30.684 07:40:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.684 07:40:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.684 [2024-11-18 07:40:23.595468] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:30.684 [2024-11-18 07:40:23.595598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596080 ] 00:06:30.943 [2024-11-18 07:40:23.957894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.943 [2024-11-18 07:40:23.990070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.228 [2024-11-18 07:40:27.023701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.228 [2024-11-18 07:40:27.056155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:34.228 07:40:27 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.228 07:40:27 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:34.228 07:40:27 json_config -- json_config/common.sh@26 -- # echo '' 00:06:34.228 00:06:34.228 07:40:27 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:34.228 07:40:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:34.228 INFO: Checking if target configuration is the same... 
00:06:34.228 07:40:27 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.228 07:40:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:34.228 07:40:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.228 + '[' 2 -ne 2 ']' 00:06:34.228 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.228 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:34.228 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.228 +++ basename /dev/fd/62 00:06:34.228 ++ mktemp /tmp/62.XXX 00:06:34.228 + tmp_file_1=/tmp/62.021 00:06:34.228 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.228 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.228 + tmp_file_2=/tmp/spdk_tgt_config.json.7Uy 00:06:34.228 + ret=0 00:06:34.228 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.486 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.486 + diff -u /tmp/62.021 /tmp/spdk_tgt_config.json.7Uy 00:06:34.486 + echo 'INFO: JSON config files are the same' 00:06:34.486 INFO: JSON config files are the same 00:06:34.486 + rm /tmp/62.021 /tmp/spdk_tgt_config.json.7Uy 00:06:34.486 + exit 0 00:06:34.486 07:40:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:34.486 07:40:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:34.486 INFO: changing configuration and checking if this can be detected... 
00:06:34.486 07:40:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.486 07:40:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.744 07:40:27 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.744 07:40:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:34.744 07:40:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.744 + '[' 2 -ne 2 ']' 00:06:34.744 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.744 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:34.744 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.744 +++ basename /dev/fd/62 00:06:35.002 ++ mktemp /tmp/62.XXX 00:06:35.002 + tmp_file_1=/tmp/62.qMu 00:06:35.002 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.002 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:35.002 + tmp_file_2=/tmp/spdk_tgt_config.json.8Au 00:06:35.002 + ret=0 00:06:35.002 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:35.260 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:35.260 + diff -u /tmp/62.qMu /tmp/spdk_tgt_config.json.8Au 00:06:35.260 + ret=1 00:06:35.260 + echo '=== Start of file: /tmp/62.qMu ===' 00:06:35.260 + cat /tmp/62.qMu 00:06:35.260 + echo '=== End of file: /tmp/62.qMu ===' 00:06:35.260 + echo '' 00:06:35.260 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8Au ===' 00:06:35.260 + cat /tmp/spdk_tgt_config.json.8Au 00:06:35.260 + echo '=== End of file: /tmp/spdk_tgt_config.json.8Au ===' 00:06:35.260 + echo '' 00:06:35.260 + rm /tmp/62.qMu /tmp/spdk_tgt_config.json.8Au 00:06:35.260 + exit 1 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:35.260 INFO: configuration change detected. 
00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:35.260 07:40:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.260 07:40:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 596080 ]] 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:35.260 07:40:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.260 07:40:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:35.260 07:40:28 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:35.261 07:40:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:35.261 07:40:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:35.261 07:40:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:35.261 07:40:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:35.261 07:40:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.261 07:40:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.261 07:40:28 json_config -- json_config/json_config.sh@330 -- # killprocess 596080 00:06:35.261 07:40:28 json_config -- common/autotest_common.sh@954 -- # '[' -z 596080 ']' 00:06:35.261 07:40:28 json_config -- common/autotest_common.sh@958 -- # kill -0 596080 
00:06:35.261 07:40:28 json_config -- common/autotest_common.sh@959 -- # uname 00:06:35.261 07:40:28 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.261 07:40:28 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596080 00:06:35.519 07:40:28 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.519 07:40:28 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.519 07:40:28 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596080' 00:06:35.519 killing process with pid 596080 00:06:35.519 07:40:28 json_config -- common/autotest_common.sh@973 -- # kill 596080 00:06:35.519 07:40:28 json_config -- common/autotest_common.sh@978 -- # wait 596080 00:06:36.892 07:40:29 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.892 07:40:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:36.892 07:40:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.892 07:40:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.892 07:40:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:36.892 07:40:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:36.892 INFO: Success 00:06:36.892 00:06:36.892 real 0m16.280s 00:06:36.892 user 0m18.475s 00:06:36.892 sys 0m2.017s 00:06:36.892 07:40:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.892 07:40:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.892 ************************************ 00:06:36.892 END TEST json_config 00:06:36.892 ************************************ 00:06:36.892 07:40:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.892 07:40:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.892 07:40:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.892 07:40:29 -- common/autotest_common.sh@10 -- # set +x 00:06:36.892 ************************************ 00:06:36.892 START TEST json_config_extra_key 00:06:36.892 ************************************ 00:06:36.892 07:40:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.153 --rc genhtml_branch_coverage=1 00:06:37.153 --rc genhtml_function_coverage=1 00:06:37.153 --rc genhtml_legend=1 00:06:37.153 --rc geninfo_all_blocks=1 
00:06:37.153 --rc geninfo_unexecuted_blocks=1 00:06:37.153 00:06:37.153 ' 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.153 --rc genhtml_branch_coverage=1 00:06:37.153 --rc genhtml_function_coverage=1 00:06:37.153 --rc genhtml_legend=1 00:06:37.153 --rc geninfo_all_blocks=1 00:06:37.153 --rc geninfo_unexecuted_blocks=1 00:06:37.153 00:06:37.153 ' 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.153 --rc genhtml_branch_coverage=1 00:06:37.153 --rc genhtml_function_coverage=1 00:06:37.153 --rc genhtml_legend=1 00:06:37.153 --rc geninfo_all_blocks=1 00:06:37.153 --rc geninfo_unexecuted_blocks=1 00:06:37.153 00:06:37.153 ' 00:06:37.153 07:40:30 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.153 --rc genhtml_branch_coverage=1 00:06:37.153 --rc genhtml_function_coverage=1 00:06:37.153 --rc genhtml_legend=1 00:06:37.153 --rc geninfo_all_blocks=1 00:06:37.153 --rc geninfo_unexecuted_blocks=1 00:06:37.153 00:06:37.153 ' 00:06:37.153 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.153 07:40:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.153 07:40:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.154 07:40:30 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.154 07:40:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.154 07:40:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.154 07:40:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:37.154 07:40:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:37.154 07:40:30 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.154 07:40:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:37.154 INFO: launching applications... 00:06:37.154 07:40:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=597003 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.154 Waiting for target to run... 
00:06:37.154 07:40:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 597003 /var/tmp/spdk_tgt.sock 00:06:37.154 07:40:30 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 597003 ']' 00:06:37.154 07:40:30 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.154 07:40:30 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.154 07:40:30 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.154 07:40:30 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.154 07:40:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.154 [2024-11-18 07:40:30.176969] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:37.154 [2024-11-18 07:40:30.177071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597003 ] 00:06:37.723 [2024-11-18 07:40:30.698917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.723 [2024-11-18 07:40:30.742714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.288 07:40:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.288 07:40:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:38.288 00:06:38.288 07:40:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:38.288 INFO: shutting down applications... 00:06:38.288 07:40:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 597003 ]] 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 597003 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 597003 00:06:38.288 07:40:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:38.857 07:40:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.857 07:40:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.857 07:40:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 597003 00:06:38.857 07:40:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.857 07:40:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:38.858 07:40:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.858 07:40:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.858 SPDK target shutdown done 00:06:38.858 07:40:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:38.858 Success 00:06:38.858 00:06:38.858 real 0m1.691s 00:06:38.858 user 0m1.500s 00:06:38.858 sys 0m0.629s 00:06:38.858 07:40:31 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.858 07:40:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:06:38.858 ************************************ 00:06:38.858 END TEST json_config_extra_key 00:06:38.858 ************************************ 00:06:38.858 07:40:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.858 07:40:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.858 07:40:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.858 07:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.858 ************************************ 00:06:38.858 START TEST alias_rpc 00:06:38.858 ************************************ 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.858 * Looking for test storage... 00:06:38.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.858 07:40:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.858 --rc genhtml_branch_coverage=1 00:06:38.858 --rc genhtml_function_coverage=1 00:06:38.858 --rc genhtml_legend=1 00:06:38.858 --rc geninfo_all_blocks=1 00:06:38.858 --rc geninfo_unexecuted_blocks=1 00:06:38.858 00:06:38.858 ' 
00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.858 --rc genhtml_branch_coverage=1 00:06:38.858 --rc genhtml_function_coverage=1 00:06:38.858 --rc genhtml_legend=1 00:06:38.858 --rc geninfo_all_blocks=1 00:06:38.858 --rc geninfo_unexecuted_blocks=1 00:06:38.858 00:06:38.858 ' 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.858 --rc genhtml_branch_coverage=1 00:06:38.858 --rc genhtml_function_coverage=1 00:06:38.858 --rc genhtml_legend=1 00:06:38.858 --rc geninfo_all_blocks=1 00:06:38.858 --rc geninfo_unexecuted_blocks=1 00:06:38.858 00:06:38.858 ' 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.858 --rc genhtml_branch_coverage=1 00:06:38.858 --rc genhtml_function_coverage=1 00:06:38.858 --rc genhtml_legend=1 00:06:38.858 --rc geninfo_all_blocks=1 00:06:38.858 --rc geninfo_unexecuted_blocks=1 00:06:38.858 00:06:38.858 ' 00:06:38.858 07:40:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.858 07:40:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=597199 00:06:38.858 07:40:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.858 07:40:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 597199 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 597199 ']' 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.858 07:40:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.858 [2024-11-18 07:40:31.914946] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:38.858 [2024-11-18 07:40:31.915048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597199 ] 00:06:39.117 [2024-11-18 07:40:31.982579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.117 [2024-11-18 07:40:32.027380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.375 07:40:32 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.375 07:40:32 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.375 07:40:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:39.633 07:40:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 597199 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 597199 ']' 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 597199 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597199 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.633 07:40:32 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 597199' 00:06:39.633 killing process with pid 597199 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@973 -- # kill 597199 00:06:39.633 07:40:32 alias_rpc -- common/autotest_common.sh@978 -- # wait 597199 00:06:39.892 00:06:39.892 real 0m1.251s 00:06:39.892 user 0m1.351s 00:06:39.892 sys 0m0.444s 00:06:39.892 07:40:32 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.892 07:40:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.892 ************************************ 00:06:39.892 END TEST alias_rpc 00:06:39.892 ************************************ 00:06:40.150 07:40:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:40.150 07:40:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:40.150 07:40:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.150 07:40:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.150 07:40:32 -- common/autotest_common.sh@10 -- # set +x 00:06:40.150 ************************************ 00:06:40.150 START TEST spdkcli_tcp 00:06:40.150 ************************************ 00:06:40.150 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:40.150 * Looking for test storage... 
00:06:40.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:40.150 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.150 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.150 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.150 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:40.150 07:40:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.151 07:40:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.151 --rc genhtml_branch_coverage=1 00:06:40.151 --rc genhtml_function_coverage=1 00:06:40.151 --rc genhtml_legend=1 00:06:40.151 --rc geninfo_all_blocks=1 00:06:40.151 --rc geninfo_unexecuted_blocks=1 00:06:40.151 00:06:40.151 ' 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.151 --rc genhtml_branch_coverage=1 00:06:40.151 --rc genhtml_function_coverage=1 00:06:40.151 --rc genhtml_legend=1 00:06:40.151 --rc geninfo_all_blocks=1 00:06:40.151 --rc geninfo_unexecuted_blocks=1 00:06:40.151 00:06:40.151 ' 00:06:40.151 07:40:33 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.151 --rc genhtml_branch_coverage=1 00:06:40.151 --rc genhtml_function_coverage=1 00:06:40.151 --rc genhtml_legend=1 00:06:40.151 --rc geninfo_all_blocks=1 00:06:40.151 --rc geninfo_unexecuted_blocks=1 00:06:40.151 00:06:40.151 ' 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.151 --rc genhtml_branch_coverage=1 00:06:40.151 --rc genhtml_function_coverage=1 00:06:40.151 --rc genhtml_legend=1 00:06:40.151 --rc geninfo_all_blocks=1 00:06:40.151 --rc geninfo_unexecuted_blocks=1 00:06:40.151 00:06:40.151 ' 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=597399 00:06:40.151 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:40.151 07:40:33 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 597399 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 597399 ']' 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.151 07:40:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.151 [2024-11-18 07:40:33.222512] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:40.151 [2024-11-18 07:40:33.222598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597399 ] 00:06:40.409 [2024-11-18 07:40:33.290233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.409 [2024-11-18 07:40:33.338750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.409 [2024-11-18 07:40:33.338755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.667 07:40:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.667 07:40:33 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:40.667 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=597523 00:06:40.667 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:40.667 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:06:40.926 [ 00:06:40.926 "bdev_malloc_delete", 00:06:40.926 "bdev_malloc_create", 00:06:40.926 "bdev_null_resize", 00:06:40.926 "bdev_null_delete", 00:06:40.926 "bdev_null_create", 00:06:40.926 "bdev_nvme_cuse_unregister", 00:06:40.926 "bdev_nvme_cuse_register", 00:06:40.926 "bdev_opal_new_user", 00:06:40.926 "bdev_opal_set_lock_state", 00:06:40.926 "bdev_opal_delete", 00:06:40.926 "bdev_opal_get_info", 00:06:40.926 "bdev_opal_create", 00:06:40.926 "bdev_nvme_opal_revert", 00:06:40.926 "bdev_nvme_opal_init", 00:06:40.926 "bdev_nvme_send_cmd", 00:06:40.926 "bdev_nvme_set_keys", 00:06:40.926 "bdev_nvme_get_path_iostat", 00:06:40.926 "bdev_nvme_get_mdns_discovery_info", 00:06:40.926 "bdev_nvme_stop_mdns_discovery", 00:06:40.926 "bdev_nvme_start_mdns_discovery", 00:06:40.926 "bdev_nvme_set_multipath_policy", 00:06:40.926 "bdev_nvme_set_preferred_path", 00:06:40.926 "bdev_nvme_get_io_paths", 00:06:40.926 "bdev_nvme_remove_error_injection", 00:06:40.926 "bdev_nvme_add_error_injection", 00:06:40.926 "bdev_nvme_get_discovery_info", 00:06:40.926 "bdev_nvme_stop_discovery", 00:06:40.926 "bdev_nvme_start_discovery", 00:06:40.926 "bdev_nvme_get_controller_health_info", 00:06:40.926 "bdev_nvme_disable_controller", 00:06:40.926 "bdev_nvme_enable_controller", 00:06:40.926 "bdev_nvme_reset_controller", 00:06:40.926 "bdev_nvme_get_transport_statistics", 00:06:40.926 "bdev_nvme_apply_firmware", 00:06:40.926 "bdev_nvme_detach_controller", 00:06:40.926 "bdev_nvme_get_controllers", 00:06:40.926 "bdev_nvme_attach_controller", 00:06:40.926 "bdev_nvme_set_hotplug", 00:06:40.926 "bdev_nvme_set_options", 00:06:40.926 "bdev_passthru_delete", 00:06:40.926 "bdev_passthru_create", 00:06:40.926 "bdev_lvol_set_parent_bdev", 00:06:40.926 "bdev_lvol_set_parent", 00:06:40.926 "bdev_lvol_check_shallow_copy", 00:06:40.926 "bdev_lvol_start_shallow_copy", 00:06:40.926 "bdev_lvol_grow_lvstore", 00:06:40.926 "bdev_lvol_get_lvols", 00:06:40.926 "bdev_lvol_get_lvstores", 
00:06:40.926 "bdev_lvol_delete", 00:06:40.926 "bdev_lvol_set_read_only", 00:06:40.926 "bdev_lvol_resize", 00:06:40.926 "bdev_lvol_decouple_parent", 00:06:40.926 "bdev_lvol_inflate", 00:06:40.926 "bdev_lvol_rename", 00:06:40.926 "bdev_lvol_clone_bdev", 00:06:40.926 "bdev_lvol_clone", 00:06:40.926 "bdev_lvol_snapshot", 00:06:40.926 "bdev_lvol_create", 00:06:40.926 "bdev_lvol_delete_lvstore", 00:06:40.926 "bdev_lvol_rename_lvstore", 00:06:40.926 "bdev_lvol_create_lvstore", 00:06:40.926 "bdev_raid_set_options", 00:06:40.926 "bdev_raid_remove_base_bdev", 00:06:40.926 "bdev_raid_add_base_bdev", 00:06:40.926 "bdev_raid_delete", 00:06:40.926 "bdev_raid_create", 00:06:40.926 "bdev_raid_get_bdevs", 00:06:40.926 "bdev_error_inject_error", 00:06:40.926 "bdev_error_delete", 00:06:40.926 "bdev_error_create", 00:06:40.926 "bdev_split_delete", 00:06:40.926 "bdev_split_create", 00:06:40.926 "bdev_delay_delete", 00:06:40.926 "bdev_delay_create", 00:06:40.926 "bdev_delay_update_latency", 00:06:40.926 "bdev_zone_block_delete", 00:06:40.926 "bdev_zone_block_create", 00:06:40.926 "blobfs_create", 00:06:40.926 "blobfs_detect", 00:06:40.926 "blobfs_set_cache_size", 00:06:40.926 "bdev_aio_delete", 00:06:40.926 "bdev_aio_rescan", 00:06:40.926 "bdev_aio_create", 00:06:40.926 "bdev_ftl_set_property", 00:06:40.926 "bdev_ftl_get_properties", 00:06:40.926 "bdev_ftl_get_stats", 00:06:40.926 "bdev_ftl_unmap", 00:06:40.926 "bdev_ftl_unload", 00:06:40.926 "bdev_ftl_delete", 00:06:40.926 "bdev_ftl_load", 00:06:40.926 "bdev_ftl_create", 00:06:40.926 "bdev_virtio_attach_controller", 00:06:40.926 "bdev_virtio_scsi_get_devices", 00:06:40.926 "bdev_virtio_detach_controller", 00:06:40.926 "bdev_virtio_blk_set_hotplug", 00:06:40.926 "bdev_iscsi_delete", 00:06:40.926 "bdev_iscsi_create", 00:06:40.926 "bdev_iscsi_set_options", 00:06:40.926 "accel_error_inject_error", 00:06:40.926 "ioat_scan_accel_module", 00:06:40.926 "dsa_scan_accel_module", 00:06:40.926 "iaa_scan_accel_module", 00:06:40.926 
"vfu_virtio_create_fs_endpoint", 00:06:40.926 "vfu_virtio_create_scsi_endpoint", 00:06:40.926 "vfu_virtio_scsi_remove_target", 00:06:40.926 "vfu_virtio_scsi_add_target", 00:06:40.926 "vfu_virtio_create_blk_endpoint", 00:06:40.926 "vfu_virtio_delete_endpoint", 00:06:40.926 "keyring_file_remove_key", 00:06:40.926 "keyring_file_add_key", 00:06:40.926 "keyring_linux_set_options", 00:06:40.926 "fsdev_aio_delete", 00:06:40.926 "fsdev_aio_create", 00:06:40.926 "iscsi_get_histogram", 00:06:40.926 "iscsi_enable_histogram", 00:06:40.926 "iscsi_set_options", 00:06:40.926 "iscsi_get_auth_groups", 00:06:40.926 "iscsi_auth_group_remove_secret", 00:06:40.926 "iscsi_auth_group_add_secret", 00:06:40.926 "iscsi_delete_auth_group", 00:06:40.926 "iscsi_create_auth_group", 00:06:40.926 "iscsi_set_discovery_auth", 00:06:40.926 "iscsi_get_options", 00:06:40.926 "iscsi_target_node_request_logout", 00:06:40.926 "iscsi_target_node_set_redirect", 00:06:40.926 "iscsi_target_node_set_auth", 00:06:40.926 "iscsi_target_node_add_lun", 00:06:40.926 "iscsi_get_stats", 00:06:40.926 "iscsi_get_connections", 00:06:40.926 "iscsi_portal_group_set_auth", 00:06:40.926 "iscsi_start_portal_group", 00:06:40.926 "iscsi_delete_portal_group", 00:06:40.926 "iscsi_create_portal_group", 00:06:40.926 "iscsi_get_portal_groups", 00:06:40.926 "iscsi_delete_target_node", 00:06:40.926 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.926 "iscsi_target_node_add_pg_ig_maps", 00:06:40.926 "iscsi_create_target_node", 00:06:40.926 "iscsi_get_target_nodes", 00:06:40.926 "iscsi_delete_initiator_group", 00:06:40.926 "iscsi_initiator_group_remove_initiators", 00:06:40.926 "iscsi_initiator_group_add_initiators", 00:06:40.926 "iscsi_create_initiator_group", 00:06:40.926 "iscsi_get_initiator_groups", 00:06:40.926 "nvmf_set_crdt", 00:06:40.926 "nvmf_set_config", 00:06:40.926 "nvmf_set_max_subsystems", 00:06:40.926 "nvmf_stop_mdns_prr", 00:06:40.926 "nvmf_publish_mdns_prr", 00:06:40.926 "nvmf_subsystem_get_listeners", 00:06:40.927 
"nvmf_subsystem_get_qpairs", 00:06:40.927 "nvmf_subsystem_get_controllers", 00:06:40.927 "nvmf_get_stats", 00:06:40.927 "nvmf_get_transports", 00:06:40.927 "nvmf_create_transport", 00:06:40.927 "nvmf_get_targets", 00:06:40.927 "nvmf_delete_target", 00:06:40.927 "nvmf_create_target", 00:06:40.927 "nvmf_subsystem_allow_any_host", 00:06:40.927 "nvmf_subsystem_set_keys", 00:06:40.927 "nvmf_subsystem_remove_host", 00:06:40.927 "nvmf_subsystem_add_host", 00:06:40.927 "nvmf_ns_remove_host", 00:06:40.927 "nvmf_ns_add_host", 00:06:40.927 "nvmf_subsystem_remove_ns", 00:06:40.927 "nvmf_subsystem_set_ns_ana_group", 00:06:40.927 "nvmf_subsystem_add_ns", 00:06:40.927 "nvmf_subsystem_listener_set_ana_state", 00:06:40.927 "nvmf_discovery_get_referrals", 00:06:40.927 "nvmf_discovery_remove_referral", 00:06:40.927 "nvmf_discovery_add_referral", 00:06:40.927 "nvmf_subsystem_remove_listener", 00:06:40.927 "nvmf_subsystem_add_listener", 00:06:40.927 "nvmf_delete_subsystem", 00:06:40.927 "nvmf_create_subsystem", 00:06:40.927 "nvmf_get_subsystems", 00:06:40.927 "env_dpdk_get_mem_stats", 00:06:40.927 "nbd_get_disks", 00:06:40.927 "nbd_stop_disk", 00:06:40.927 "nbd_start_disk", 00:06:40.927 "ublk_recover_disk", 00:06:40.927 "ublk_get_disks", 00:06:40.927 "ublk_stop_disk", 00:06:40.927 "ublk_start_disk", 00:06:40.927 "ublk_destroy_target", 00:06:40.927 "ublk_create_target", 00:06:40.927 "virtio_blk_create_transport", 00:06:40.927 "virtio_blk_get_transports", 00:06:40.927 "vhost_controller_set_coalescing", 00:06:40.927 "vhost_get_controllers", 00:06:40.927 "vhost_delete_controller", 00:06:40.927 "vhost_create_blk_controller", 00:06:40.927 "vhost_scsi_controller_remove_target", 00:06:40.927 "vhost_scsi_controller_add_target", 00:06:40.927 "vhost_start_scsi_controller", 00:06:40.927 "vhost_create_scsi_controller", 00:06:40.927 "thread_set_cpumask", 00:06:40.927 "scheduler_set_options", 00:06:40.927 "framework_get_governor", 00:06:40.927 "framework_get_scheduler", 00:06:40.927 
"framework_set_scheduler", 00:06:40.927 "framework_get_reactors", 00:06:40.927 "thread_get_io_channels", 00:06:40.927 "thread_get_pollers", 00:06:40.927 "thread_get_stats", 00:06:40.927 "framework_monitor_context_switch", 00:06:40.927 "spdk_kill_instance", 00:06:40.927 "log_enable_timestamps", 00:06:40.927 "log_get_flags", 00:06:40.927 "log_clear_flag", 00:06:40.927 "log_set_flag", 00:06:40.927 "log_get_level", 00:06:40.927 "log_set_level", 00:06:40.927 "log_get_print_level", 00:06:40.927 "log_set_print_level", 00:06:40.927 "framework_enable_cpumask_locks", 00:06:40.927 "framework_disable_cpumask_locks", 00:06:40.927 "framework_wait_init", 00:06:40.927 "framework_start_init", 00:06:40.927 "scsi_get_devices", 00:06:40.927 "bdev_get_histogram", 00:06:40.927 "bdev_enable_histogram", 00:06:40.927 "bdev_set_qos_limit", 00:06:40.927 "bdev_set_qd_sampling_period", 00:06:40.927 "bdev_get_bdevs", 00:06:40.927 "bdev_reset_iostat", 00:06:40.927 "bdev_get_iostat", 00:06:40.927 "bdev_examine", 00:06:40.927 "bdev_wait_for_examine", 00:06:40.927 "bdev_set_options", 00:06:40.927 "accel_get_stats", 00:06:40.927 "accel_set_options", 00:06:40.927 "accel_set_driver", 00:06:40.927 "accel_crypto_key_destroy", 00:06:40.927 "accel_crypto_keys_get", 00:06:40.927 "accel_crypto_key_create", 00:06:40.927 "accel_assign_opc", 00:06:40.927 "accel_get_module_info", 00:06:40.927 "accel_get_opc_assignments", 00:06:40.927 "vmd_rescan", 00:06:40.927 "vmd_remove_device", 00:06:40.927 "vmd_enable", 00:06:40.927 "sock_get_default_impl", 00:06:40.927 "sock_set_default_impl", 00:06:40.927 "sock_impl_set_options", 00:06:40.927 "sock_impl_get_options", 00:06:40.927 "iobuf_get_stats", 00:06:40.927 "iobuf_set_options", 00:06:40.927 "keyring_get_keys", 00:06:40.927 "vfu_tgt_set_base_path", 00:06:40.927 "framework_get_pci_devices", 00:06:40.927 "framework_get_config", 00:06:40.927 "framework_get_subsystems", 00:06:40.927 "fsdev_set_opts", 00:06:40.927 "fsdev_get_opts", 00:06:40.927 "trace_get_info", 
00:06:40.927 "trace_get_tpoint_group_mask", 00:06:40.927 "trace_disable_tpoint_group", 00:06:40.927 "trace_enable_tpoint_group", 00:06:40.927 "trace_clear_tpoint_mask", 00:06:40.927 "trace_set_tpoint_mask", 00:06:40.927 "notify_get_notifications", 00:06:40.927 "notify_get_types", 00:06:40.927 "spdk_get_version", 00:06:40.927 "rpc_get_methods" 00:06:40.927 ] 00:06:40.927 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.927 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.927 07:40:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 597399 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 597399 ']' 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 597399 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597399 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597399' 00:06:40.927 killing process with pid 597399 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 597399 00:06:40.927 07:40:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 597399 00:06:41.493 00:06:41.493 real 0m1.285s 00:06:41.493 user 0m2.310s 00:06:41.493 sys 0m0.458s 00:06:41.493 07:40:34 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.493 07:40:34 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:06:41.493 ************************************ 00:06:41.493 END TEST spdkcli_tcp 00:06:41.493 ************************************ 00:06:41.493 07:40:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.493 07:40:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.493 07:40:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.493 07:40:34 -- common/autotest_common.sh@10 -- # set +x 00:06:41.493 ************************************ 00:06:41.493 START TEST dpdk_mem_utility 00:06:41.493 ************************************ 00:06:41.493 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.493 * Looking for test storage... 00:06:41.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:41.493 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.493 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.493 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.493 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.493 07:40:34 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.493 07:40:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:41.493 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.493 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.493 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.493 --rc genhtml_branch_coverage=1 00:06:41.493 --rc genhtml_function_coverage=1 00:06:41.493 --rc genhtml_legend=1 00:06:41.493 --rc geninfo_all_blocks=1 00:06:41.493 --rc geninfo_unexecuted_blocks=1 00:06:41.493 00:06:41.493 ' 00:06:41.493 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.493 --rc genhtml_branch_coverage=1 00:06:41.493 --rc genhtml_function_coverage=1 00:06:41.493 --rc genhtml_legend=1 00:06:41.493 --rc geninfo_all_blocks=1 00:06:41.494 --rc geninfo_unexecuted_blocks=1 00:06:41.494 00:06:41.494 ' 00:06:41.494 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.494 --rc genhtml_branch_coverage=1 00:06:41.494 --rc genhtml_function_coverage=1 00:06:41.494 --rc genhtml_legend=1 00:06:41.494 --rc geninfo_all_blocks=1 00:06:41.494 --rc geninfo_unexecuted_blocks=1 00:06:41.494 00:06:41.494 ' 00:06:41.494 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.494 --rc genhtml_branch_coverage=1 00:06:41.494 --rc genhtml_function_coverage=1 00:06:41.494 --rc genhtml_legend=1 00:06:41.494 --rc geninfo_all_blocks=1 00:06:41.494 --rc geninfo_unexecuted_blocks=1 00:06:41.494 00:06:41.494 ' 00:06:41.494 07:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.494 07:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=597632 00:06:41.494 07:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.494 07:40:34 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 597632 00:06:41.494 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 597632 ']' 00:06:41.494 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.494 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.494 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.494 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.494 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.494 [2024-11-18 07:40:34.540792] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:41.494 [2024-11-18 07:40:34.540898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597632 ] 00:06:41.752 [2024-11-18 07:40:34.609326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.752 [2024-11-18 07:40:34.654520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.011 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.011 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:42.011 07:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:42.011 07:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:42.011 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.011 
07:40:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.011 { 00:06:42.011 "filename": "/tmp/spdk_mem_dump.txt" 00:06:42.011 } 00:06:42.011 07:40:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.011 07:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:42.011 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:42.011 1 heaps totaling size 810.000000 MiB 00:06:42.011 size: 810.000000 MiB heap id: 0 00:06:42.011 end heaps---------- 00:06:42.011 9 mempools totaling size 595.772034 MiB 00:06:42.011 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:42.011 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:42.011 size: 92.545471 MiB name: bdev_io_597632 00:06:42.011 size: 50.003479 MiB name: msgpool_597632 00:06:42.011 size: 36.509338 MiB name: fsdev_io_597632 00:06:42.011 size: 21.763794 MiB name: PDU_Pool 00:06:42.011 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:42.011 size: 4.133484 MiB name: evtpool_597632 00:06:42.011 size: 0.026123 MiB name: Session_Pool 00:06:42.011 end mempools------- 00:06:42.011 6 memzones totaling size 4.142822 MiB 00:06:42.011 size: 1.000366 MiB name: RG_ring_0_597632 00:06:42.011 size: 1.000366 MiB name: RG_ring_1_597632 00:06:42.011 size: 1.000366 MiB name: RG_ring_4_597632 00:06:42.011 size: 1.000366 MiB name: RG_ring_5_597632 00:06:42.011 size: 0.125366 MiB name: RG_ring_2_597632 00:06:42.011 size: 0.015991 MiB name: RG_ring_3_597632 00:06:42.011 end memzones------- 00:06:42.011 07:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:42.011 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:42.011 list of free elements. 
size: 10.862488 MiB 00:06:42.011 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:42.011 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:42.011 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:42.011 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:42.011 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:42.011 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:42.011 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:42.011 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:42.011 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:42.011 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:42.011 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:42.011 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:42.011 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:42.011 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:42.011 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:42.011 list of standard malloc elements. 
size: 199.218628 MiB 00:06:42.011 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:42.011 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:42.011 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:42.011 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:42.011 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:42.012 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:42.012 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:42.012 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:42.012 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:42.012 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:42.012 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:42.012 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:42.012 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:42.012 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:42.012 list of memzone associated elements. 
size: 599.918884 MiB 00:06:42.012 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:42.012 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:42.012 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:42.012 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:42.012 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:42.012 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_597632_0 00:06:42.012 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:42.012 associated memzone info: size: 48.002930 MiB name: MP_msgpool_597632_0 00:06:42.012 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:42.012 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_597632_0 00:06:42.012 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:42.012 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:42.012 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:42.012 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:42.012 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:42.012 associated memzone info: size: 3.000122 MiB name: MP_evtpool_597632_0 00:06:42.012 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:42.012 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_597632 00:06:42.012 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:42.012 associated memzone info: size: 1.007996 MiB name: MP_evtpool_597632 00:06:42.012 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:42.012 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:42.012 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:42.012 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:42.012 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:42.012 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:42.012 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:42.012 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:42.012 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:42.012 associated memzone info: size: 1.000366 MiB name: RG_ring_0_597632 00:06:42.012 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:42.012 associated memzone info: size: 1.000366 MiB name: RG_ring_1_597632 00:06:42.012 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:42.012 associated memzone info: size: 1.000366 MiB name: RG_ring_4_597632 00:06:42.012 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:42.012 associated memzone info: size: 1.000366 MiB name: RG_ring_5_597632 00:06:42.012 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:42.012 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_597632 00:06:42.012 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:42.012 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_597632 00:06:42.012 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:42.012 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:42.012 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:42.012 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:42.012 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:42.012 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:42.012 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:42.012 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_597632 00:06:42.012 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:42.012 associated memzone info: size: 0.125366 MiB name: RG_ring_2_597632 00:06:42.012 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:42.012 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:42.012 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:42.012 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:42.012 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:42.012 associated memzone info: size: 0.015991 MiB name: RG_ring_3_597632 00:06:42.012 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:42.012 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:42.012 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:42.012 associated memzone info: size: 0.000183 MiB name: MP_msgpool_597632 00:06:42.012 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:42.012 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_597632 00:06:42.012 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:42.012 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_597632 00:06:42.012 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:42.012 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:42.012 07:40:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:42.012 07:40:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 597632 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 597632 ']' 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 597632 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597632 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.012 07:40:35 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597632' 00:06:42.012 killing process with pid 597632 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 597632 00:06:42.012 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 597632 00:06:42.580 00:06:42.580 real 0m1.077s 00:06:42.580 user 0m1.070s 00:06:42.580 sys 0m0.413s 00:06:42.580 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.580 07:40:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.580 ************************************ 00:06:42.580 END TEST dpdk_mem_utility 00:06:42.580 ************************************ 00:06:42.580 07:40:35 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.580 07:40:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.580 07:40:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.580 07:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:42.580 ************************************ 00:06:42.580 START TEST event 00:06:42.580 ************************************ 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.580 * Looking for test storage... 
00:06:42.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.580 07:40:35 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.580 07:40:35 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.580 07:40:35 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.580 07:40:35 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.580 07:40:35 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.580 07:40:35 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.580 07:40:35 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.580 07:40:35 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.580 07:40:35 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.580 07:40:35 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.580 07:40:35 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.580 07:40:35 event -- scripts/common.sh@344 -- # case "$op" in 00:06:42.580 07:40:35 event -- scripts/common.sh@345 -- # : 1 00:06:42.580 07:40:35 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.580 07:40:35 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.580 07:40:35 event -- scripts/common.sh@365 -- # decimal 1 00:06:42.580 07:40:35 event -- scripts/common.sh@353 -- # local d=1 00:06:42.580 07:40:35 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.580 07:40:35 event -- scripts/common.sh@355 -- # echo 1 00:06:42.580 07:40:35 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.580 07:40:35 event -- scripts/common.sh@366 -- # decimal 2 00:06:42.580 07:40:35 event -- scripts/common.sh@353 -- # local d=2 00:06:42.580 07:40:35 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.580 07:40:35 event -- scripts/common.sh@355 -- # echo 2 00:06:42.580 07:40:35 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.580 07:40:35 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.580 07:40:35 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.580 07:40:35 event -- scripts/common.sh@368 -- # return 0 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.580 --rc genhtml_branch_coverage=1 00:06:42.580 --rc genhtml_function_coverage=1 00:06:42.580 --rc genhtml_legend=1 00:06:42.580 --rc geninfo_all_blocks=1 00:06:42.580 --rc geninfo_unexecuted_blocks=1 00:06:42.580 00:06:42.580 ' 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.580 --rc genhtml_branch_coverage=1 00:06:42.580 --rc genhtml_function_coverage=1 00:06:42.580 --rc genhtml_legend=1 00:06:42.580 --rc geninfo_all_blocks=1 00:06:42.580 --rc geninfo_unexecuted_blocks=1 00:06:42.580 00:06:42.580 ' 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.580 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:42.580 --rc genhtml_branch_coverage=1 00:06:42.580 --rc genhtml_function_coverage=1 00:06:42.580 --rc genhtml_legend=1 00:06:42.580 --rc geninfo_all_blocks=1 00:06:42.580 --rc geninfo_unexecuted_blocks=1 00:06:42.580 00:06:42.580 ' 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.580 --rc genhtml_branch_coverage=1 00:06:42.580 --rc genhtml_function_coverage=1 00:06:42.580 --rc genhtml_legend=1 00:06:42.580 --rc geninfo_all_blocks=1 00:06:42.580 --rc geninfo_unexecuted_blocks=1 00:06:42.580 00:06:42.580 ' 00:06:42.580 07:40:35 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:42.580 07:40:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.580 07:40:35 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:42.580 07:40:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.580 07:40:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.580 ************************************ 00:06:42.580 START TEST event_perf 00:06:42.580 ************************************ 00:06:42.580 07:40:35 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.580 Running I/O for 1 seconds...[2024-11-18 07:40:35.662260] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:42.580 [2024-11-18 07:40:35.662335] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597873 ] 00:06:42.838 [2024-11-18 07:40:35.733146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.838 [2024-11-18 07:40:35.785246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.838 [2024-11-18 07:40:35.786900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.838 [2024-11-18 07:40:35.787062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.839 [2024-11-18 07:40:35.787066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.774 Running I/O for 1 seconds... 00:06:43.774 lcore 0: 237879 00:06:43.774 lcore 1: 237877 00:06:43.774 lcore 2: 237878 00:06:43.774 lcore 3: 237879 00:06:43.774 done. 
00:06:43.774 00:06:43.774 real 0m1.183s 00:06:43.774 user 0m4.104s 00:06:43.774 sys 0m0.074s 00:06:43.774 07:40:36 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.774 07:40:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.774 ************************************ 00:06:43.774 END TEST event_perf 00:06:43.774 ************************************ 00:06:43.774 07:40:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.774 07:40:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:43.774 07:40:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.774 07:40:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.033 ************************************ 00:06:44.033 START TEST event_reactor 00:06:44.033 ************************************ 00:06:44.033 07:40:36 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:44.033 [2024-11-18 07:40:36.893386] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:44.033 [2024-11-18 07:40:36.893454] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598080 ] 00:06:44.033 [2024-11-18 07:40:36.961619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.033 [2024-11-18 07:40:37.005008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.968 test_start 00:06:44.968 oneshot 00:06:44.968 tick 100 00:06:44.968 tick 100 00:06:44.968 tick 250 00:06:44.968 tick 100 00:06:44.968 tick 100 00:06:44.968 tick 100 00:06:44.968 tick 250 00:06:44.968 tick 500 00:06:44.968 tick 100 00:06:44.968 tick 100 00:06:44.968 tick 250 00:06:44.968 tick 100 00:06:44.968 tick 100 00:06:44.968 test_end 00:06:44.968 00:06:44.968 real 0m1.170s 00:06:44.968 user 0m1.103s 00:06:44.968 sys 0m0.064s 00:06:44.968 07:40:38 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.968 07:40:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:44.968 ************************************ 00:06:44.968 END TEST event_reactor 00:06:44.968 ************************************ 00:06:45.226 07:40:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.226 07:40:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:45.226 07:40:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.226 07:40:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.226 ************************************ 00:06:45.226 START TEST event_reactor_perf 00:06:45.226 ************************************ 00:06:45.226 07:40:38 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:45.226 [2024-11-18 07:40:38.112621] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:45.226 [2024-11-18 07:40:38.112686] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598238 ] 00:06:45.226 [2024-11-18 07:40:38.179867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.226 [2024-11-18 07:40:38.221636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.620 test_start 00:06:46.620 test_end 00:06:46.620 Performance: 440842 events per second 00:06:46.620 00:06:46.620 real 0m1.169s 00:06:46.620 user 0m1.104s 00:06:46.620 sys 0m0.061s 00:06:46.620 07:40:39 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.620 07:40:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.620 ************************************ 00:06:46.620 END TEST event_reactor_perf 00:06:46.620 ************************************ 00:06:46.620 07:40:39 event -- event/event.sh@49 -- # uname -s 00:06:46.620 07:40:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:46.620 07:40:39 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.620 07:40:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.620 07:40:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.620 07:40:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.620 ************************************ 00:06:46.620 START TEST event_scheduler 00:06:46.620 ************************************ 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.620 * Looking for test storage... 00:06:46.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.620 07:40:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.620 --rc genhtml_branch_coverage=1 00:06:46.620 --rc genhtml_function_coverage=1 00:06:46.620 --rc genhtml_legend=1 00:06:46.620 --rc geninfo_all_blocks=1 00:06:46.620 --rc geninfo_unexecuted_blocks=1 00:06:46.620 00:06:46.620 ' 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.620 --rc genhtml_branch_coverage=1 00:06:46.620 --rc genhtml_function_coverage=1 00:06:46.620 --rc 
genhtml_legend=1 00:06:46.620 --rc geninfo_all_blocks=1 00:06:46.620 --rc geninfo_unexecuted_blocks=1 00:06:46.620 00:06:46.620 ' 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.620 --rc genhtml_branch_coverage=1 00:06:46.620 --rc genhtml_function_coverage=1 00:06:46.620 --rc genhtml_legend=1 00:06:46.620 --rc geninfo_all_blocks=1 00:06:46.620 --rc geninfo_unexecuted_blocks=1 00:06:46.620 00:06:46.620 ' 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.620 --rc genhtml_branch_coverage=1 00:06:46.620 --rc genhtml_function_coverage=1 00:06:46.620 --rc genhtml_legend=1 00:06:46.620 --rc geninfo_all_blocks=1 00:06:46.620 --rc geninfo_unexecuted_blocks=1 00:06:46.620 00:06:46.620 ' 00:06:46.620 07:40:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:46.620 07:40:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=598426 00:06:46.620 07:40:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:46.620 07:40:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.620 07:40:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 598426 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 598426 ']' 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.620 07:40:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.620 [2024-11-18 07:40:39.517138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:46.620 [2024-11-18 07:40:39.517233] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598426 ] 00:06:46.620 [2024-11-18 07:40:39.587981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.620 [2024-11-18 07:40:39.637557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.620 [2024-11-18 07:40:39.637616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.620 [2024-11-18 07:40:39.637681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.620 [2024-11-18 07:40:39.637683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:46.879 07:40:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 [2024-11-18 07:40:39.758668] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:46.879 [2024-11-18 07:40:39.758698] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:46.879 [2024-11-18 07:40:39.758715] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.879 [2024-11-18 07:40:39.758727] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.879 [2024-11-18 07:40:39.758737] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 [2024-11-18 07:40:39.856415] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 ************************************ 00:06:46.879 START TEST scheduler_create_thread 00:06:46.879 ************************************ 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 2 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 3 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 4 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 5 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 6 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 7 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 8 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 9 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 10 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.879 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.138 07:40:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.138 07:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.071 07:40:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.071 00:06:48.071 real 0m1.173s 00:06:48.071 user 0m0.013s 00:06:48.071 sys 0m0.001s 00:06:48.071 07:40:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.071 07:40:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.071 ************************************ 00:06:48.071 END TEST scheduler_create_thread 00:06:48.071 ************************************ 00:06:48.071 07:40:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:48.071 07:40:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 598426 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 598426 ']' 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 598426 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 598426 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 598426' 00:06:48.071 killing process with pid 598426 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 598426 00:06:48.071 07:40:41 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 598426 00:06:48.638 [2024-11-18 07:40:41.538641] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:48.897 00:06:48.897 real 0m2.414s 00:06:48.897 user 0m2.988s 00:06:48.897 sys 0m0.343s 00:06:48.897 07:40:41 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.897 07:40:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.897 ************************************ 00:06:48.897 END TEST event_scheduler 00:06:48.897 ************************************ 00:06:48.897 07:40:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:48.897 07:40:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:48.897 07:40:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.897 07:40:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.897 07:40:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.897 ************************************ 00:06:48.897 START TEST app_repeat 00:06:48.897 ************************************ 00:06:48.897 07:40:41 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=598742 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 598742' 00:06:48.897 Process app_repeat pid: 598742 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:48.897 spdk_app_start Round 0 00:06:48.897 07:40:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 598742 /var/tmp/spdk-nbd.sock 00:06:48.897 07:40:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 598742 ']' 00:06:48.897 07:40:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.897 07:40:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.897 07:40:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:48.897 07:40:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.897 07:40:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.897 [2024-11-18 07:40:41.817752] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:48.897 [2024-11-18 07:40:41.817817] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598742 ] 00:06:48.897 [2024-11-18 07:40:41.885099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.897 [2024-11-18 07:40:41.933087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.897 [2024-11-18 07:40:41.933091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.155 07:40:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.155 07:40:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:49.155 07:40:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.414 Malloc0 00:06:49.414 07:40:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.671 Malloc1 00:06:49.671 07:40:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.671 
07:40:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.671 07:40:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.930 /dev/nbd0 00:06:49.930 07:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.930 07:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:49.930 1+0 records in 00:06:49.930 1+0 records out 00:06:49.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218314 s, 18.8 MB/s 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:49.930 07:40:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:49.930 07:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.930 07:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.930 07:40:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.188 /dev/nbd1 00:06:50.446 07:40:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.446 07:40:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.446 07:40:43 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.446 1+0 records in 00:06:50.446 1+0 records out 00:06:50.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177662 s, 23.1 MB/s 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.446 07:40:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:50.446 07:40:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.446 07:40:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.446 07:40:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.446 07:40:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.446 07:40:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.704 { 00:06:50.704 "nbd_device": "/dev/nbd0", 00:06:50.704 "bdev_name": "Malloc0" 00:06:50.704 }, 00:06:50.704 { 00:06:50.704 "nbd_device": "/dev/nbd1", 00:06:50.704 "bdev_name": "Malloc1" 00:06:50.704 } 00:06:50.704 ]' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.704 { 00:06:50.704 "nbd_device": "/dev/nbd0", 00:06:50.704 "bdev_name": "Malloc0" 00:06:50.704 
}, 00:06:50.704 { 00:06:50.704 "nbd_device": "/dev/nbd1", 00:06:50.704 "bdev_name": "Malloc1" 00:06:50.704 } 00:06:50.704 ]' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.704 /dev/nbd1' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.704 /dev/nbd1' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.704 256+0 records in 00:06:50.704 256+0 records out 00:06:50.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522431 s, 201 MB/s 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.704 256+0 records in 00:06:50.704 256+0 records out 00:06:50.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202054 s, 51.9 MB/s 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.704 256+0 records in 00:06:50.704 256+0 records out 00:06:50.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219835 s, 47.7 MB/s 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.704 07:40:43 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.704 07:40:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.705 07:40:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.705 07:40:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.705 07:40:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.963 07:40:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.221 07:40:44 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.221 07:40:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.479 07:40:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.479 07:40:44 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.045 07:40:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.045 [2024-11-18 07:40:45.048222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.045 [2024-11-18 07:40:45.093327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.045 [2024-11-18 07:40:45.093331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.304 [2024-11-18 07:40:45.152294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.304 [2024-11-18 07:40:45.152364] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.833 07:40:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.833 07:40:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:54.833 spdk_app_start Round 1 00:06:54.833 07:40:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 598742 /var/tmp/spdk-nbd.sock 00:06:54.833 07:40:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 598742 ']' 00:06:54.833 07:40:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.833 07:40:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.833 07:40:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:54.833 07:40:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.833 07:40:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.091 07:40:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.091 07:40:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:55.091 07:40:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.348 Malloc0 00:06:55.348 07:40:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.605 Malloc1 00:06:55.863 07:40:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.863 07:40:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.121 /dev/nbd0 00:06:56.121 07:40:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.121 07:40:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.121 1+0 records in 00:06:56.121 1+0 records out 00:06:56.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233057 s, 17.6 MB/s 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:56.121 07:40:49 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.121 07:40:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:56.121 07:40:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.121 07:40:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.121 07:40:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.378 /dev/nbd1 00:06:56.378 07:40:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.378 07:40:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.378 1+0 records in 00:06:56.378 1+0 records out 00:06:56.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222151 s, 18.4 MB/s 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.378 07:40:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:56.378 07:40:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.378 07:40:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.378 07:40:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.379 07:40:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.379 07:40:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.637 { 00:06:56.637 "nbd_device": "/dev/nbd0", 00:06:56.637 "bdev_name": "Malloc0" 00:06:56.637 }, 00:06:56.637 { 00:06:56.637 "nbd_device": "/dev/nbd1", 00:06:56.637 "bdev_name": "Malloc1" 00:06:56.637 } 00:06:56.637 ]' 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.637 { 00:06:56.637 "nbd_device": "/dev/nbd0", 00:06:56.637 "bdev_name": "Malloc0" 00:06:56.637 }, 00:06:56.637 { 00:06:56.637 "nbd_device": "/dev/nbd1", 00:06:56.637 "bdev_name": "Malloc1" 00:06:56.637 } 00:06:56.637 ]' 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.637 /dev/nbd1' 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.637 /dev/nbd1' 00:06:56.637 
07:40:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.637 256+0 records in 00:06:56.637 256+0 records out 00:06:56.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515468 s, 203 MB/s 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.637 256+0 records in 00:06:56.637 256+0 records out 00:06:56.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199888 s, 52.5 MB/s 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.637 07:40:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.895 256+0 records in 00:06:56.895 256+0 records out 00:06:56.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225373 s, 46.5 MB/s 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.895 07:40:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.153 07:40:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.411 07:40:50 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.411 07:40:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.670 07:40:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.670 07:40:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.928 07:40:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.186 [2024-11-18 07:40:51.164571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.186 [2024-11-18 07:40:51.208698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.186 [2024-11-18 07:40:51.208698] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.186 [2024-11-18 07:40:51.268233] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.186 [2024-11-18 07:40:51.268306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.470 07:40:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.470 07:40:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:01.470 spdk_app_start Round 2 00:07:01.470 07:40:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 598742 /var/tmp/spdk-nbd.sock 00:07:01.470 07:40:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 598742 ']' 00:07:01.470 07:40:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.470 07:40:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.470 07:40:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:01.470 07:40:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.470 07:40:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.470 07:40:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.470 07:40:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:01.470 07:40:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.470 Malloc0 00:07:01.470 07:40:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.728 Malloc1 00:07:01.986 07:40:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.986 07:40:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.245 /dev/nbd0 00:07:02.245 07:40:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.245 07:40:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.245 1+0 records in 00:07:02.245 1+0 records out 00:07:02.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222908 s, 18.4 MB/s 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:02.245 07:40:55 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.245 07:40:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:02.245 07:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.245 07:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.245 07:40:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.503 /dev/nbd1 00:07:02.503 07:40:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.503 07:40:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.503 1+0 records in 00:07:02.503 1+0 records out 00:07:02.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197077 s, 20.8 MB/s 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.503 07:40:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:02.503 07:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.503 07:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.503 07:40:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.503 07:40:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.503 07:40:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.761 { 00:07:02.761 "nbd_device": "/dev/nbd0", 00:07:02.761 "bdev_name": "Malloc0" 00:07:02.761 }, 00:07:02.761 { 00:07:02.761 "nbd_device": "/dev/nbd1", 00:07:02.761 "bdev_name": "Malloc1" 00:07:02.761 } 00:07:02.761 ]' 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.761 { 00:07:02.761 "nbd_device": "/dev/nbd0", 00:07:02.761 "bdev_name": "Malloc0" 00:07:02.761 }, 00:07:02.761 { 00:07:02.761 "nbd_device": "/dev/nbd1", 00:07:02.761 "bdev_name": "Malloc1" 00:07:02.761 } 00:07:02.761 ]' 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.761 /dev/nbd1' 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.761 /dev/nbd1' 00:07:02.761 
07:40:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.761 07:40:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.762 256+0 records in 00:07:02.762 256+0 records out 00:07:02.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540639 s, 194 MB/s 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.762 256+0 records in 00:07:02.762 256+0 records out 00:07:02.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200758 s, 52.2 MB/s 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.762 256+0 records in 00:07:02.762 256+0 records out 00:07:02.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216077 s, 48.5 MB/s 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.762 07:40:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.328 07:40:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.586 07:40:56 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.586 07:40:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.843 07:40:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.843 07:40:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:04.101 07:40:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.359 [2024-11-18 07:40:57.227987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.359 [2024-11-18 07:40:57.274026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.359 [2024-11-18 07:40:57.274031] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.359 [2024-11-18 07:40:57.332351] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.359 [2024-11-18 07:40:57.332424] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:07.643 07:41:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 598742 /var/tmp/spdk-nbd.sock 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 598742 ']' 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:07.643 07:41:00 event.app_repeat -- event/event.sh@39 -- # killprocess 598742 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 598742 ']' 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 598742 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 598742 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 598742' 00:07:07.643 killing process with pid 598742 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 598742 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 598742 00:07:07.643 spdk_app_start is called in Round 0. 00:07:07.643 Shutdown signal received, stop current app iteration 00:07:07.643 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:07:07.643 spdk_app_start is called in Round 1. 00:07:07.643 Shutdown signal received, stop current app iteration 00:07:07.643 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:07:07.643 spdk_app_start is called in Round 2. 
00:07:07.643 Shutdown signal received, stop current app iteration 00:07:07.643 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:07:07.643 spdk_app_start is called in Round 3. 00:07:07.643 Shutdown signal received, stop current app iteration 00:07:07.643 07:41:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:07.643 07:41:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:07.643 00:07:07.643 real 0m18.741s 00:07:07.643 user 0m41.654s 00:07:07.643 sys 0m3.223s 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.643 07:41:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 ************************************ 00:07:07.643 END TEST app_repeat 00:07:07.643 ************************************ 00:07:07.643 07:41:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:07.643 07:41:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:07.643 07:41:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.643 07:41:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.643 07:41:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 ************************************ 00:07:07.643 START TEST cpu_locks 00:07:07.643 ************************************ 00:07:07.643 07:41:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:07.643 * Looking for test storage... 
00:07:07.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:07.643 07:41:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.643 07:41:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.643 07:41:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.643 07:41:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.643 07:41:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.902 07:41:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:07.902 07:41:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.902 07:41:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.902 --rc genhtml_branch_coverage=1 00:07:07.902 --rc genhtml_function_coverage=1 00:07:07.902 --rc genhtml_legend=1 00:07:07.902 --rc geninfo_all_blocks=1 00:07:07.902 --rc geninfo_unexecuted_blocks=1 00:07:07.902 00:07:07.902 ' 00:07:07.902 07:41:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.902 --rc genhtml_branch_coverage=1 00:07:07.902 --rc genhtml_function_coverage=1 00:07:07.902 --rc genhtml_legend=1 00:07:07.902 --rc geninfo_all_blocks=1 00:07:07.902 --rc geninfo_unexecuted_blocks=1 
00:07:07.902 00:07:07.902 ' 00:07:07.902 07:41:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.902 --rc genhtml_branch_coverage=1 00:07:07.902 --rc genhtml_function_coverage=1 00:07:07.902 --rc genhtml_legend=1 00:07:07.902 --rc geninfo_all_blocks=1 00:07:07.902 --rc geninfo_unexecuted_blocks=1 00:07:07.902 00:07:07.902 ' 00:07:07.902 07:41:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.902 --rc genhtml_branch_coverage=1 00:07:07.902 --rc genhtml_function_coverage=1 00:07:07.902 --rc genhtml_legend=1 00:07:07.902 --rc geninfo_all_blocks=1 00:07:07.902 --rc geninfo_unexecuted_blocks=1 00:07:07.902 00:07:07.902 ' 00:07:07.902 07:41:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:07.902 07:41:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:07.902 07:41:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:07.902 07:41:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:07.902 07:41:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.902 07:41:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.902 07:41:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.902 ************************************ 00:07:07.902 START TEST default_locks 00:07:07.902 ************************************ 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=601235 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 601235 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 601235 ']' 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.902 07:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.902 [2024-11-18 07:41:00.811621] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:07.902 [2024-11-18 07:41:00.811706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601235 ] 00:07:07.902 [2024-11-18 07:41:00.877392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.902 [2024-11-18 07:41:00.927002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.160 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.160 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:08.160 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 601235 00:07:08.160 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 601235 00:07:08.160 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.418 lslocks: write error 00:07:08.418 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 601235 00:07:08.418 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 601235 ']' 00:07:08.418 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 601235 00:07:08.418 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:08.418 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.418 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601235 00:07:08.676 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.676 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.676 07:41:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 601235' 00:07:08.676 killing process with pid 601235 00:07:08.676 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 601235 00:07:08.676 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 601235 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 601235 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 601235 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 601235 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 601235 ']' 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (601235) - No such process 00:07:08.936 ERROR: process (pid: 601235) is no longer running 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.936 00:07:08.936 real 0m1.140s 00:07:08.936 user 0m1.105s 00:07:08.936 sys 0m0.524s 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.936 07:41:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.936 ************************************ 00:07:08.936 END TEST default_locks 00:07:08.936 ************************************ 00:07:08.936 07:41:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:08.936 07:41:01 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.936 07:41:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.936 07:41:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.936 ************************************ 00:07:08.936 START TEST default_locks_via_rpc 00:07:08.936 ************************************ 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=601398 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 601398 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 601398 ']' 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.936 07:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.936 [2024-11-18 07:41:02.008303] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:08.936 [2024-11-18 07:41:02.008410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601398 ] 00:07:09.195 [2024-11-18 07:41:02.074687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.195 [2024-11-18 07:41:02.117052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.453 07:41:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 601398 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 601398 00:07:09.453 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 601398 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 601398 ']' 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 601398 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601398 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601398' 00:07:09.711 killing process with pid 601398 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 601398 00:07:09.711 07:41:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 601398 00:07:09.969 00:07:09.969 real 0m1.099s 00:07:09.969 user 0m1.061s 00:07:09.969 sys 0m0.488s 00:07:09.969 07:41:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.969 07:41:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.969 ************************************ 00:07:09.969 END TEST default_locks_via_rpc 00:07:09.969 ************************************ 00:07:10.228 07:41:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:10.228 07:41:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.228 07:41:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.228 07:41:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.228 ************************************ 00:07:10.228 START TEST non_locking_app_on_locked_coremask 00:07:10.228 ************************************ 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=601560 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 601560 /var/tmp/spdk.sock 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 601560 ']' 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:10.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.228 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.228 [2024-11-18 07:41:03.158626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:10.228 [2024-11-18 07:41:03.158734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601560 ] 00:07:10.228 [2024-11-18 07:41:03.225636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.228 [2024-11-18 07:41:03.268206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=601573 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 601573 /var/tmp/spdk2.sock 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 601573 ']' 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.486 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.487 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.487 07:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.487 [2024-11-18 07:41:03.569301] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:10.487 [2024-11-18 07:41:03.569399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601573 ] 00:07:10.745 [2024-11-18 07:41:03.671507] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:10.745 [2024-11-18 07:41:03.671556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.745 [2024-11-18 07:41:03.768000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.312 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.312 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:11.312 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 601560 00:07:11.312 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 601560 00:07:11.312 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.878 lslocks: write error 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 601560 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 601560 ']' 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 601560 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601560 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 601560' 00:07:11.878 killing process with pid 601560 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 601560 00:07:11.878 07:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 601560 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 601573 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 601573 ']' 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 601573 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601573 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601573' 00:07:12.812 killing process with pid 601573 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 601573 00:07:12.812 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 601573 00:07:13.071 00:07:13.071 real 0m2.859s 00:07:13.071 user 0m2.888s 00:07:13.071 sys 0m1.011s 00:07:13.071 07:41:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.071 07:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 ************************************ 00:07:13.071 END TEST non_locking_app_on_locked_coremask 00:07:13.071 ************************************ 00:07:13.071 07:41:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:13.071 07:41:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.071 07:41:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.071 07:41:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 ************************************ 00:07:13.071 START TEST locking_app_on_unlocked_coremask 00:07:13.071 ************************************ 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=601872 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 601872 /var/tmp/spdk.sock 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 601872 ']' 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.071 07:41:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.071 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 [2024-11-18 07:41:06.068890] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:13.071 [2024-11-18 07:41:06.068983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601872 ] 00:07:13.071 [2024-11-18 07:41:06.137098] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.071 [2024-11-18 07:41:06.137129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.330 [2024-11-18 07:41:06.183902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=602001 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 602001 /var/tmp/spdk2.sock 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 602001 ']' 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.588 07:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.588 [2024-11-18 07:41:06.495709] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:13.588 [2024-11-18 07:41:06.495807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602001 ] 00:07:13.588 [2024-11-18 07:41:06.594026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.847 [2024-11-18 07:41:06.683180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.414 07:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.414 07:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:14.414 07:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 602001 00:07:14.414 07:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 602001 00:07:14.414 07:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.980 lslocks: write error 00:07:14.980 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 601872 00:07:14.980 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 601872 ']' 00:07:14.980 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 601872 00:07:14.980 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:14.980 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.980 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601872 00:07:15.239 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.240 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.240 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601872' 00:07:15.240 killing process with pid 601872 00:07:15.240 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 601872 00:07:15.240 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 601872 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 602001 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 602001 ']' 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 602001 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602001 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602001' 00:07:15.860 killing process with pid 602001 00:07:15.860 07:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 602001 00:07:15.860 07:41:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 602001 00:07:16.142 00:07:16.142 real 0m3.202s 00:07:16.142 user 0m3.469s 00:07:16.142 sys 0m1.054s 00:07:16.142 07:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.142 07:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.142 ************************************ 00:07:16.142 END TEST locking_app_on_unlocked_coremask 00:07:16.142 ************************************ 00:07:16.399 07:41:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:16.399 07:41:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.399 07:41:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.399 07:41:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.399 ************************************ 00:07:16.399 START TEST locking_app_on_locked_coremask 00:07:16.399 ************************************ 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=602311 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 602311 /var/tmp/spdk.sock 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 602311 ']' 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.399 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.399 [2024-11-18 07:41:09.323501] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:16.400 [2024-11-18 07:41:09.323592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602311 ] 00:07:16.400 [2024-11-18 07:41:09.390266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.400 [2024-11-18 07:41:09.439467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=602441 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 602441 /var/tmp/spdk2.sock 
00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 602441 /var/tmp/spdk2.sock 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 602441 /var/tmp/spdk2.sock 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 602441 ']' 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.657 07:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.657 [2024-11-18 07:41:09.737093] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:16.657 [2024-11-18 07:41:09.737167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602441 ] 00:07:16.915 [2024-11-18 07:41:09.836221] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 602311 has claimed it. 00:07:16.915 [2024-11-18 07:41:09.836271] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (602441) - No such process 00:07:17.480 ERROR: process (pid: 602441) is no longer running 00:07:17.480 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.480 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:17.480 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:17.480 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.480 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.480 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.480 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 602311 00:07:17.480 07:41:10 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 602311 00:07:17.480 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.738 lslocks: write error 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 602311 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 602311 ']' 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 602311 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602311 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602311' 00:07:17.738 killing process with pid 602311 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 602311 00:07:17.738 07:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 602311 00:07:18.305 00:07:18.305 real 0m1.899s 00:07:18.305 user 0m2.117s 00:07:18.305 sys 0m0.620s 00:07:18.305 07:41:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.305 07:41:11 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.305 ************************************ 00:07:18.305 END TEST locking_app_on_locked_coremask 00:07:18.305 ************************************ 00:07:18.305 07:41:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:18.305 07:41:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.305 07:41:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.305 07:41:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.305 ************************************ 00:07:18.305 START TEST locking_overlapped_coremask 00:07:18.305 ************************************ 00:07:18.305 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:18.305 07:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=602603 00:07:18.305 07:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:18.305 07:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 602603 /var/tmp/spdk.sock 00:07:18.305 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 602603 ']' 00:07:18.305 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.305 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.305 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.306 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.306 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.306 [2024-11-18 07:41:11.276053] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:18.306 [2024-11-18 07:41:11.276141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602603 ] 00:07:18.306 [2024-11-18 07:41:11.347064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.564 [2024-11-18 07:41:11.399572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.564 [2024-11-18 07:41:11.399599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.564 [2024-11-18 07:41:11.399604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=602619 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 602619 /var/tmp/spdk2.sock 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 602619 /var/tmp/spdk2.sock 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
-m 0x1c -r /var/tmp/spdk2.sock 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 602619 /var/tmp/spdk2.sock 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 602619 ']' 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.823 07:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.823 [2024-11-18 07:41:11.736236] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:18.823 [2024-11-18 07:41:11.736356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602619 ] 00:07:18.823 [2024-11-18 07:41:11.848800] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 602603 has claimed it. 00:07:18.823 [2024-11-18 07:41:11.848859] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (602619) - No such process 00:07:19.389 ERROR: process (pid: 602619) is no longer running 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 602603 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 602603 ']' 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 602603 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.389 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602603 00:07:19.647 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.647 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.647 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602603' 00:07:19.647 killing process with pid 602603 00:07:19.647 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 602603 00:07:19.647 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 602603 00:07:19.905 00:07:19.905 real 0m1.649s 00:07:19.905 user 0m4.647s 00:07:19.905 sys 0m0.472s 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.905 ************************************ 
00:07:19.905 END TEST locking_overlapped_coremask 00:07:19.905 ************************************ 00:07:19.905 07:41:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:19.905 07:41:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.905 07:41:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.905 07:41:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.905 ************************************ 00:07:19.905 START TEST locking_overlapped_coremask_via_rpc 00:07:19.905 ************************************ 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=602889 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 602889 /var/tmp/spdk.sock 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 602889 ']' 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:19.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.905 07:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.905 [2024-11-18 07:41:12.972428] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:19.905 [2024-11-18 07:41:12.972524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602889 ] 00:07:20.164 [2024-11-18 07:41:13.040190] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:20.164 [2024-11-18 07:41:13.040234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.164 [2024-11-18 07:41:13.093513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.164 [2024-11-18 07:41:13.093544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.164 [2024-11-18 07:41:13.093548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=602908 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 602908 /var/tmp/spdk2.sock 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 602908 ']' 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.422 07:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.422 [2024-11-18 07:41:13.422107] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:20.422 [2024-11-18 07:41:13.422205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602908 ] 00:07:20.680 [2024-11-18 07:41:13.528898] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.680 [2024-11-18 07:41:13.528942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.680 [2024-11-18 07:41:13.625347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.680 [2024-11-18 07:41:13.625412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:20.680 [2024-11-18 07:41:13.625414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.614 07:41:14 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.614 [2024-11-18 07:41:14.398613] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 602889 has claimed it. 00:07:21.614 request: 00:07:21.614 { 00:07:21.614 "method": "framework_enable_cpumask_locks", 00:07:21.614 "req_id": 1 00:07:21.614 } 00:07:21.614 Got JSON-RPC error response 00:07:21.614 response: 00:07:21.614 { 00:07:21.614 "code": -32603, 00:07:21.614 "message": "Failed to claim CPU core: 2" 00:07:21.614 } 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 602889 /var/tmp/spdk.sock 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 602889 ']' 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 602908 /var/tmp/spdk2.sock 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 602908 ']' 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.614 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.872 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.872 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.872 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:21.872 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.872 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.872 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.872 00:07:21.872 real 0m2.024s 00:07:21.872 user 0m1.153s 00:07:21.872 sys 0m0.178s 00:07:21.872 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.872 07:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.872 ************************************ 00:07:21.872 END TEST locking_overlapped_coremask_via_rpc 00:07:21.872 ************************************ 00:07:22.131 07:41:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:22.131 07:41:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 602889 ]] 00:07:22.131 07:41:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 602889 00:07:22.131 07:41:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 602889 ']' 00:07:22.131 07:41:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 602889 00:07:22.131 07:41:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.131 07:41:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.131 07:41:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602889 00:07:22.131 07:41:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.131 07:41:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.131 07:41:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602889' 00:07:22.131 killing process with pid 602889 00:07:22.131 07:41:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 602889 00:07:22.131 07:41:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 602889 00:07:22.389 07:41:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 602908 ]] 00:07:22.389 07:41:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 602908 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 602908 ']' 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 602908 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602908 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602908' 00:07:22.389 
killing process with pid 602908 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 602908 00:07:22.389 07:41:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 602908 00:07:22.955 07:41:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.955 07:41:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:22.955 07:41:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 602889 ]] 00:07:22.955 07:41:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 602889 00:07:22.955 07:41:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 602889 ']' 00:07:22.955 07:41:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 602889 00:07:22.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (602889) - No such process 00:07:22.955 07:41:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 602889 is not found' 00:07:22.955 Process with pid 602889 is not found 00:07:22.955 07:41:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 602908 ]] 00:07:22.956 07:41:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 602908 00:07:22.956 07:41:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 602908 ']' 00:07:22.956 07:41:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 602908 00:07:22.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (602908) - No such process 00:07:22.956 07:41:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 602908 is not found' 00:07:22.956 Process with pid 602908 is not found 00:07:22.956 07:41:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.956 00:07:22.956 real 0m15.256s 00:07:22.956 user 0m27.872s 00:07:22.956 sys 0m5.292s 00:07:22.956 07:41:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.956 07:41:15 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.956 ************************************ 00:07:22.956 END TEST cpu_locks 00:07:22.956 ************************************ 00:07:22.956 00:07:22.956 real 0m40.386s 00:07:22.956 user 1m19.038s 00:07:22.956 sys 0m9.322s 00:07:22.956 07:41:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.956 07:41:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.956 ************************************ 00:07:22.956 END TEST event 00:07:22.956 ************************************ 00:07:22.956 07:41:15 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.956 07:41:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.956 07:41:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.956 07:41:15 -- common/autotest_common.sh@10 -- # set +x 00:07:22.956 ************************************ 00:07:22.956 START TEST thread 00:07:22.956 ************************************ 00:07:22.956 07:41:15 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.956 * Looking for test storage... 
00:07:22.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:22.956 07:41:15 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.956 07:41:15 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.956 07:41:15 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.214 07:41:16 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.214 07:41:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.215 07:41:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.215 07:41:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.215 07:41:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.215 07:41:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.215 07:41:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.215 07:41:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.215 07:41:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.215 07:41:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.215 07:41:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.215 07:41:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.215 07:41:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:23.215 07:41:16 thread -- scripts/common.sh@345 -- # : 1 00:07:23.215 07:41:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.215 07:41:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.215 07:41:16 thread -- scripts/common.sh@365 -- # decimal 1 00:07:23.215 07:41:16 thread -- scripts/common.sh@353 -- # local d=1 00:07:23.215 07:41:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.215 07:41:16 thread -- scripts/common.sh@355 -- # echo 1 00:07:23.215 07:41:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.215 07:41:16 thread -- scripts/common.sh@366 -- # decimal 2 00:07:23.215 07:41:16 thread -- scripts/common.sh@353 -- # local d=2 00:07:23.215 07:41:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.215 07:41:16 thread -- scripts/common.sh@355 -- # echo 2 00:07:23.215 07:41:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.215 07:41:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.215 07:41:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.215 07:41:16 thread -- scripts/common.sh@368 -- # return 0 00:07:23.215 07:41:16 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.215 07:41:16 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.215 --rc genhtml_branch_coverage=1 00:07:23.215 --rc genhtml_function_coverage=1 00:07:23.215 --rc genhtml_legend=1 00:07:23.215 --rc geninfo_all_blocks=1 00:07:23.215 --rc geninfo_unexecuted_blocks=1 00:07:23.215 00:07:23.215 ' 00:07:23.215 07:41:16 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.215 --rc genhtml_branch_coverage=1 00:07:23.215 --rc genhtml_function_coverage=1 00:07:23.215 --rc genhtml_legend=1 00:07:23.215 --rc geninfo_all_blocks=1 00:07:23.215 --rc geninfo_unexecuted_blocks=1 00:07:23.215 00:07:23.215 ' 00:07:23.215 07:41:16 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.215 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.215 --rc genhtml_branch_coverage=1 00:07:23.215 --rc genhtml_function_coverage=1 00:07:23.215 --rc genhtml_legend=1 00:07:23.215 --rc geninfo_all_blocks=1 00:07:23.215 --rc geninfo_unexecuted_blocks=1 00:07:23.215 00:07:23.215 ' 00:07:23.215 07:41:16 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.215 --rc genhtml_branch_coverage=1 00:07:23.215 --rc genhtml_function_coverage=1 00:07:23.215 --rc genhtml_legend=1 00:07:23.215 --rc geninfo_all_blocks=1 00:07:23.215 --rc geninfo_unexecuted_blocks=1 00:07:23.215 00:07:23.215 ' 00:07:23.215 07:41:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.215 07:41:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:23.215 07:41:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.215 07:41:16 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.215 ************************************ 00:07:23.215 START TEST thread_poller_perf 00:07:23.215 ************************************ 00:07:23.215 07:41:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.215 [2024-11-18 07:41:16.107820] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:23.215 [2024-11-18 07:41:16.107900] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603287 ] 00:07:23.215 [2024-11-18 07:41:16.173950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.215 [2024-11-18 07:41:16.222246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.215 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:24.590 [2024-11-18T06:41:17.678Z] ====================================== 00:07:24.590 [2024-11-18T06:41:17.678Z] busy:2712942828 (cyc) 00:07:24.590 [2024-11-18T06:41:17.678Z] total_run_count: 367000 00:07:24.590 [2024-11-18T06:41:17.678Z] tsc_hz: 2700000000 (cyc) 00:07:24.590 [2024-11-18T06:41:17.678Z] ====================================== 00:07:24.590 [2024-11-18T06:41:17.678Z] poller_cost: 7392 (cyc), 2737 (nsec) 00:07:24.590 00:07:24.590 real 0m1.180s 00:07:24.590 user 0m1.116s 00:07:24.590 sys 0m0.059s 00:07:24.590 07:41:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.590 07:41:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.590 ************************************ 00:07:24.590 END TEST thread_poller_perf 00:07:24.590 ************************************ 00:07:24.590 07:41:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.590 07:41:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:24.590 07:41:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.590 07:41:17 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.590 ************************************ 00:07:24.590 START TEST thread_poller_perf 00:07:24.590 
************************************ 00:07:24.590 07:41:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.590 [2024-11-18 07:41:17.333989] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:24.591 [2024-11-18 07:41:17.334056] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603453 ] 00:07:24.591 [2024-11-18 07:41:17.401513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.591 [2024-11-18 07:41:17.446465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.591 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:25.525 [2024-11-18T06:41:18.613Z] ====================================== 00:07:25.525 [2024-11-18T06:41:18.613Z] busy:2702419830 (cyc) 00:07:25.525 [2024-11-18T06:41:18.613Z] total_run_count: 4879000 00:07:25.525 [2024-11-18T06:41:18.613Z] tsc_hz: 2700000000 (cyc) 00:07:25.525 [2024-11-18T06:41:18.613Z] ====================================== 00:07:25.525 [2024-11-18T06:41:18.613Z] poller_cost: 553 (cyc), 204 (nsec) 00:07:25.525 00:07:25.525 real 0m1.171s 00:07:25.525 user 0m1.098s 00:07:25.525 sys 0m0.068s 00:07:25.525 07:41:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.525 07:41:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.525 ************************************ 00:07:25.525 END TEST thread_poller_perf 00:07:25.525 ************************************ 00:07:25.525 07:41:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:25.525 00:07:25.525 real 0m2.593s 00:07:25.525 user 0m2.349s 00:07:25.525 sys 0m0.248s 00:07:25.525 07:41:18 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.525 07:41:18 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.525 ************************************ 00:07:25.525 END TEST thread 00:07:25.525 ************************************ 00:07:25.525 07:41:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:25.525 07:41:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:25.525 07:41:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.525 07:41:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.525 07:41:18 -- common/autotest_common.sh@10 -- # set +x 00:07:25.525 ************************************ 00:07:25.525 START TEST app_cmdline 00:07:25.525 ************************************ 00:07:25.525 07:41:18 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:25.525 * Looking for test storage... 00:07:25.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.525 07:41:18 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.525 07:41:18 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.525 07:41:18 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.784 07:41:18 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.784 07:41:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:25.785 07:41:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:25.785 07:41:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.785 07:41:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:25.785 07:41:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.785 07:41:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.785 07:41:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.785 07:41:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.785 --rc genhtml_branch_coverage=1 
00:07:25.785 --rc genhtml_function_coverage=1 00:07:25.785 --rc genhtml_legend=1 00:07:25.785 --rc geninfo_all_blocks=1 00:07:25.785 --rc geninfo_unexecuted_blocks=1 00:07:25.785 00:07:25.785 ' 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.785 --rc genhtml_branch_coverage=1 00:07:25.785 --rc genhtml_function_coverage=1 00:07:25.785 --rc genhtml_legend=1 00:07:25.785 --rc geninfo_all_blocks=1 00:07:25.785 --rc geninfo_unexecuted_blocks=1 00:07:25.785 00:07:25.785 ' 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.785 --rc genhtml_branch_coverage=1 00:07:25.785 --rc genhtml_function_coverage=1 00:07:25.785 --rc genhtml_legend=1 00:07:25.785 --rc geninfo_all_blocks=1 00:07:25.785 --rc geninfo_unexecuted_blocks=1 00:07:25.785 00:07:25.785 ' 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.785 --rc genhtml_branch_coverage=1 00:07:25.785 --rc genhtml_function_coverage=1 00:07:25.785 --rc genhtml_legend=1 00:07:25.785 --rc geninfo_all_blocks=1 00:07:25.785 --rc geninfo_unexecuted_blocks=1 00:07:25.785 00:07:25.785 ' 00:07:25.785 07:41:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:25.785 07:41:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=603769 00:07:25.785 07:41:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:25.785 07:41:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 603769 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 603769 ']' 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.785 07:41:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.785 [2024-11-18 07:41:18.751495] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:25.785 [2024-11-18 07:41:18.751610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603769 ] 00:07:25.785 [2024-11-18 07:41:18.818395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.785 [2024-11-18 07:41:18.864323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.044 07:41:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.044 07:41:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:26.044 07:41:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:26.304 { 00:07:26.304 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:26.304 "fields": { 00:07:26.304 "major": 25, 00:07:26.304 "minor": 1, 00:07:26.304 "patch": 0, 00:07:26.304 "suffix": "-pre", 00:07:26.304 "commit": "83e8405e4" 00:07:26.304 } 00:07:26.304 } 00:07:26.304 07:41:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:26.304 07:41:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:26.304 07:41:19 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:26.304 07:41:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:26.304 07:41:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:26.304 07:41:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.304 07:41:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:26.304 07:41:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.304 07:41:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:26.304 07:41:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.562 07:41:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:26.562 07:41:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:26.562 07:41:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:26.562 07:41:19 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.820 request: 00:07:26.820 { 00:07:26.820 "method": "env_dpdk_get_mem_stats", 00:07:26.820 "req_id": 1 00:07:26.820 } 00:07:26.820 Got JSON-RPC error response 00:07:26.820 response: 00:07:26.820 { 00:07:26.820 "code": -32601, 00:07:26.820 "message": "Method not found" 00:07:26.820 } 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.820 07:41:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 603769 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 603769 ']' 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 603769 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603769 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603769' 00:07:26.820 killing process with pid 603769 00:07:26.820 07:41:19 
app_cmdline -- common/autotest_common.sh@973 -- # kill 603769 00:07:26.820 07:41:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 603769 00:07:27.078 00:07:27.079 real 0m1.567s 00:07:27.079 user 0m1.948s 00:07:27.079 sys 0m0.472s 00:07:27.079 07:41:20 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.079 07:41:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.079 ************************************ 00:07:27.079 END TEST app_cmdline 00:07:27.079 ************************************ 00:07:27.079 07:41:20 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:27.079 07:41:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.079 07:41:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.079 07:41:20 -- common/autotest_common.sh@10 -- # set +x 00:07:27.337 ************************************ 00:07:27.337 START TEST version 00:07:27.337 ************************************ 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:27.337 * Looking for test storage... 
00:07:27.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.337 07:41:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.337 07:41:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.337 07:41:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.337 07:41:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.337 07:41:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.337 07:41:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.337 07:41:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.337 07:41:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.337 07:41:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.337 07:41:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.337 07:41:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.337 07:41:20 version -- scripts/common.sh@344 -- # case "$op" in 00:07:27.337 07:41:20 version -- scripts/common.sh@345 -- # : 1 00:07:27.337 07:41:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.337 07:41:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.337 07:41:20 version -- scripts/common.sh@365 -- # decimal 1 00:07:27.337 07:41:20 version -- scripts/common.sh@353 -- # local d=1 00:07:27.337 07:41:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.337 07:41:20 version -- scripts/common.sh@355 -- # echo 1 00:07:27.337 07:41:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.337 07:41:20 version -- scripts/common.sh@366 -- # decimal 2 00:07:27.337 07:41:20 version -- scripts/common.sh@353 -- # local d=2 00:07:27.337 07:41:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.337 07:41:20 version -- scripts/common.sh@355 -- # echo 2 00:07:27.337 07:41:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.337 07:41:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.337 07:41:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.337 07:41:20 version -- scripts/common.sh@368 -- # return 0 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.337 --rc genhtml_branch_coverage=1 00:07:27.337 --rc genhtml_function_coverage=1 00:07:27.337 --rc genhtml_legend=1 00:07:27.337 --rc geninfo_all_blocks=1 00:07:27.337 --rc geninfo_unexecuted_blocks=1 00:07:27.337 00:07:27.337 ' 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.337 --rc genhtml_branch_coverage=1 00:07:27.337 --rc genhtml_function_coverage=1 00:07:27.337 --rc genhtml_legend=1 00:07:27.337 --rc geninfo_all_blocks=1 00:07:27.337 --rc geninfo_unexecuted_blocks=1 00:07:27.337 00:07:27.337 ' 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.337 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.337 --rc genhtml_branch_coverage=1 00:07:27.337 --rc genhtml_function_coverage=1 00:07:27.337 --rc genhtml_legend=1 00:07:27.337 --rc geninfo_all_blocks=1 00:07:27.337 --rc geninfo_unexecuted_blocks=1 00:07:27.337 00:07:27.337 ' 00:07:27.337 07:41:20 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.337 --rc genhtml_branch_coverage=1 00:07:27.337 --rc genhtml_function_coverage=1 00:07:27.338 --rc genhtml_legend=1 00:07:27.338 --rc geninfo_all_blocks=1 00:07:27.338 --rc geninfo_unexecuted_blocks=1 00:07:27.338 00:07:27.338 ' 00:07:27.338 07:41:20 version -- app/version.sh@17 -- # get_header_version major 00:07:27.338 07:41:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:27.338 07:41:20 version -- app/version.sh@14 -- # cut -f2 00:07:27.338 07:41:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.338 07:41:20 version -- app/version.sh@17 -- # major=25 00:07:27.338 07:41:20 version -- app/version.sh@18 -- # get_header_version minor 00:07:27.338 07:41:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:27.338 07:41:20 version -- app/version.sh@14 -- # cut -f2 00:07:27.338 07:41:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.338 07:41:20 version -- app/version.sh@18 -- # minor=1 00:07:27.338 07:41:20 version -- app/version.sh@19 -- # get_header_version patch 00:07:27.338 07:41:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:27.338 07:41:20 version -- app/version.sh@14 -- # cut -f2 00:07:27.338 07:41:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.338 
07:41:20 version -- app/version.sh@19 -- # patch=0 00:07:27.338 07:41:20 version -- app/version.sh@20 -- # get_header_version suffix 00:07:27.338 07:41:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:27.338 07:41:20 version -- app/version.sh@14 -- # cut -f2 00:07:27.338 07:41:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.338 07:41:20 version -- app/version.sh@20 -- # suffix=-pre 00:07:27.338 07:41:20 version -- app/version.sh@22 -- # version=25.1 00:07:27.338 07:41:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:27.338 07:41:20 version -- app/version.sh@28 -- # version=25.1rc0 00:07:27.338 07:41:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:27.338 07:41:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:27.338 07:41:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:27.338 07:41:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:27.338 00:07:27.338 real 0m0.202s 00:07:27.338 user 0m0.137s 00:07:27.338 sys 0m0.091s 00:07:27.338 07:41:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.338 07:41:20 version -- common/autotest_common.sh@10 -- # set +x 00:07:27.338 ************************************ 00:07:27.338 END TEST version 00:07:27.338 ************************************ 00:07:27.338 07:41:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:27.338 07:41:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:27.338 07:41:20 -- spdk/autotest.sh@194 -- # uname -s 00:07:27.338 07:41:20 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:27.338 07:41:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.338 07:41:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.338 07:41:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:27.338 07:41:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:27.338 07:41:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:27.338 07:41:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.338 07:41:20 -- common/autotest_common.sh@10 -- # set +x 00:07:27.596 07:41:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:27.596 07:41:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:27.596 07:41:20 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:27.596 07:41:20 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:27.596 07:41:20 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:27.596 07:41:20 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:27.596 07:41:20 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:27.596 07:41:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.596 07:41:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.596 07:41:20 -- common/autotest_common.sh@10 -- # set +x 00:07:27.596 ************************************ 00:07:27.596 START TEST nvmf_tcp 00:07:27.596 ************************************ 00:07:27.596 07:41:20 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:27.596 * Looking for test storage... 
00:07:27.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:27.596 07:41:20 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.596 07:41:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.596 07:41:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.596 07:41:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.596 07:41:20 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:27.596 07:41:20 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.596 07:41:20 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.596 --rc genhtml_branch_coverage=1 00:07:27.596 --rc genhtml_function_coverage=1 00:07:27.596 --rc genhtml_legend=1 00:07:27.596 --rc geninfo_all_blocks=1 00:07:27.596 --rc geninfo_unexecuted_blocks=1 00:07:27.596 00:07:27.596 ' 00:07:27.597 07:41:20 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.597 --rc genhtml_branch_coverage=1 00:07:27.597 --rc genhtml_function_coverage=1 00:07:27.597 --rc genhtml_legend=1 00:07:27.597 --rc geninfo_all_blocks=1 00:07:27.597 --rc geninfo_unexecuted_blocks=1 00:07:27.597 00:07:27.597 ' 00:07:27.597 07:41:20 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:27.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.597 --rc genhtml_branch_coverage=1 00:07:27.597 --rc genhtml_function_coverage=1 00:07:27.597 --rc genhtml_legend=1 00:07:27.597 --rc geninfo_all_blocks=1 00:07:27.597 --rc geninfo_unexecuted_blocks=1 00:07:27.597 00:07:27.597 ' 00:07:27.597 07:41:20 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.597 --rc genhtml_branch_coverage=1 00:07:27.597 --rc genhtml_function_coverage=1 00:07:27.597 --rc genhtml_legend=1 00:07:27.597 --rc geninfo_all_blocks=1 00:07:27.597 --rc geninfo_unexecuted_blocks=1 00:07:27.597 00:07:27.597 ' 00:07:27.597 07:41:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:27.597 07:41:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:27.597 07:41:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:27.597 07:41:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.597 07:41:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.597 07:41:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.597 ************************************ 00:07:27.597 START TEST nvmf_target_core 00:07:27.597 ************************************ 00:07:27.597 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:27.597 * Looking for test storage... 
00:07:27.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.854 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.855 --rc genhtml_branch_coverage=1 00:07:27.855 --rc genhtml_function_coverage=1 00:07:27.855 --rc genhtml_legend=1 00:07:27.855 --rc geninfo_all_blocks=1 00:07:27.855 --rc geninfo_unexecuted_blocks=1 00:07:27.855 00:07:27.855 ' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.855 --rc genhtml_branch_coverage=1 
00:07:27.855 --rc genhtml_function_coverage=1 00:07:27.855 --rc genhtml_legend=1 00:07:27.855 --rc geninfo_all_blocks=1 00:07:27.855 --rc geninfo_unexecuted_blocks=1 00:07:27.855 00:07:27.855 ' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.855 --rc genhtml_branch_coverage=1 00:07:27.855 --rc genhtml_function_coverage=1 00:07:27.855 --rc genhtml_legend=1 00:07:27.855 --rc geninfo_all_blocks=1 00:07:27.855 --rc geninfo_unexecuted_blocks=1 00:07:27.855 00:07:27.855 ' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.855 --rc genhtml_branch_coverage=1 00:07:27.855 --rc genhtml_function_coverage=1 00:07:27.855 --rc genhtml_legend=1 00:07:27.855 --rc geninfo_all_blocks=1 00:07:27.855 --rc geninfo_unexecuted_blocks=1 00:07:27.855 00:07:27.855 ' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.855 ************************************ 00:07:27.855 START TEST nvmf_abort 00:07:27.855 ************************************ 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.855 * Looking for test storage... 
00:07:27.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.855 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.115 
07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.115 --rc genhtml_branch_coverage=1 00:07:28.115 --rc genhtml_function_coverage=1 00:07:28.115 --rc genhtml_legend=1 00:07:28.115 --rc geninfo_all_blocks=1 00:07:28.115 --rc 
geninfo_unexecuted_blocks=1 00:07:28.115 00:07:28.115 ' 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.115 --rc genhtml_branch_coverage=1 00:07:28.115 --rc genhtml_function_coverage=1 00:07:28.115 --rc genhtml_legend=1 00:07:28.115 --rc geninfo_all_blocks=1 00:07:28.115 --rc geninfo_unexecuted_blocks=1 00:07:28.115 00:07:28.115 ' 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.115 --rc genhtml_branch_coverage=1 00:07:28.115 --rc genhtml_function_coverage=1 00:07:28.115 --rc genhtml_legend=1 00:07:28.115 --rc geninfo_all_blocks=1 00:07:28.115 --rc geninfo_unexecuted_blocks=1 00:07:28.115 00:07:28.115 ' 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.115 --rc genhtml_branch_coverage=1 00:07:28.115 --rc genhtml_function_coverage=1 00:07:28.115 --rc genhtml_legend=1 00:07:28.115 --rc geninfo_all_blocks=1 00:07:28.115 --rc geninfo_unexecuted_blocks=1 00:07:28.115 00:07:28.115 ' 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.115 07:41:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:28.115 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:28.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:28.116 07:41:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.020 07:41:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:30.020 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:30.020 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.020 07:41:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:30.020 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:30.020 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.020 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.021 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:07:30.279 00:07:30.279 --- 10.0.0.2 ping statistics --- 00:07:30.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.279 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:07:30.279 00:07:30.279 --- 10.0.0.1 ping statistics --- 00:07:30.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.279 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=605852 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 605852 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 605852 ']' 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.279 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.279 [2024-11-18 07:41:23.276664] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:30.279 [2024-11-18 07:41:23.276731] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.279 [2024-11-18 07:41:23.346091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.538 [2024-11-18 07:41:23.392469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.538 [2024-11-18 07:41:23.392542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.538 [2024-11-18 07:41:23.392565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.538 [2024-11-18 07:41:23.392576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.538 [2024-11-18 07:41:23.392585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:30.538 [2024-11-18 07:41:23.393943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.538 [2024-11-18 07:41:23.394001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.538 [2024-11-18 07:41:23.394004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.538 [2024-11-18 07:41:23.537973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.538 Malloc0 00:07:30.538 07:41:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.538 Delay0 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.538 [2024-11-18 07:41:23.615603] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.538 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.796 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.796 07:41:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:30.796 [2024-11-18 07:41:23.730427] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:33.324 Initializing NVMe Controllers 00:07:33.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:33.324 controller IO queue size 128 less than required 00:07:33.324 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:33.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:33.324 Initialization complete. Launching workers. 
00:07:33.324 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27655 00:07:33.324 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27720, failed to submit 62 00:07:33.324 success 27659, unsuccessful 61, failed 0 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.324 rmmod nvme_tcp 00:07:33.324 rmmod nvme_fabrics 00:07:33.324 rmmod nvme_keyring 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:33.324 07:41:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 605852 ']' 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 605852 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 605852 ']' 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 605852 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605852 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605852' 00:07:33.324 killing process with pid 605852 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 605852 00:07:33.324 07:41:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 605852 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.324 07:41:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.234 00:07:35.234 real 0m7.373s 00:07:35.234 user 0m10.617s 00:07:35.234 sys 0m2.579s 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.234 ************************************ 00:07:35.234 END TEST nvmf_abort 00:07:35.234 ************************************ 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.234 ************************************ 00:07:35.234 START TEST nvmf_ns_hotplug_stress 00:07:35.234 ************************************ 00:07:35.234 07:41:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.234 * Looking for test storage... 00:07:35.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.234 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.493 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.494 
07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.494 07:41:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.494 --rc genhtml_branch_coverage=1 00:07:35.494 --rc genhtml_function_coverage=1 00:07:35.494 --rc genhtml_legend=1 00:07:35.494 --rc geninfo_all_blocks=1 00:07:35.494 --rc geninfo_unexecuted_blocks=1 00:07:35.494 00:07:35.494 ' 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.494 --rc genhtml_branch_coverage=1 00:07:35.494 --rc genhtml_function_coverage=1 00:07:35.494 --rc genhtml_legend=1 00:07:35.494 --rc geninfo_all_blocks=1 00:07:35.494 --rc geninfo_unexecuted_blocks=1 00:07:35.494 00:07:35.494 ' 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.494 --rc genhtml_branch_coverage=1 00:07:35.494 --rc genhtml_function_coverage=1 00:07:35.494 --rc genhtml_legend=1 00:07:35.494 --rc geninfo_all_blocks=1 00:07:35.494 --rc geninfo_unexecuted_blocks=1 00:07:35.494 00:07:35.494 ' 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.494 --rc genhtml_branch_coverage=1 00:07:35.494 --rc genhtml_function_coverage=1 00:07:35.494 --rc genhtml_legend=1 00:07:35.494 --rc geninfo_all_blocks=1 00:07:35.494 --rc geninfo_unexecuted_blocks=1 00:07:35.494 
00:07:35.494 ' 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.494 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.495 07:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.402 07:41:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:37.402 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.403 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.403 07:41:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.403 07:41:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.403 07:41:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.403 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:07:37.404 00:07:37.404 --- 10.0.0.2 ping statistics --- 00:07:37.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.404 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:07:37.404 00:07:37.404 --- 10.0.0.1 ping statistics --- 00:07:37.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.404 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.404 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=608086 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 608086 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 608086 ']' 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.662 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.662 [2024-11-18 07:41:30.555479] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:37.662 [2024-11-18 07:41:30.555592] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.662 [2024-11-18 07:41:30.631655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.662 [2024-11-18 07:41:30.676680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.662 [2024-11-18 07:41:30.676738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.662 [2024-11-18 07:41:30.676759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.662 [2024-11-18 07:41:30.676784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.662 [2024-11-18 07:41:30.676793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:37.662 [2024-11-18 07:41:30.678284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.662 [2024-11-18 07:41:30.678386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.662 [2024-11-18 07:41:30.678395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.921 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.921 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:37.921 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.921 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.921 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.921 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.921 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:37.921 07:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:38.179 [2024-11-18 07:41:31.081052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.179 07:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:38.437 07:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.695 [2024-11-18 07:41:31.607835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.695 07:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.952 07:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:39.211 Malloc0 00:07:39.211 07:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:39.468 Delay0 00:07:39.468 07:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.726 07:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:39.984 NULL1 00:07:39.984 07:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:40.242 07:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=608512 00:07:40.242 07:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:40.242 07:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:40.242 07:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.614 Read completed with error (sct=0, sc=11) 00:07:41.615 07:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.872 07:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:41.872 07:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:42.130 true 00:07:42.130 07:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:42.131 07:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.697 07:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.262 07:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:43.262 07:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:43.262 true 00:07:43.262 07:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:43.262 07:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.521 07:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.086 07:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:44.086 07:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:44.086 true 00:07:44.086 07:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:44.086 07:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.344 07:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.602 07:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:44.602 07:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:44.860 true 00:07:45.118 07:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:45.118 07:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.050 07:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.309 07:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:46.309 07:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:46.567 true 00:07:46.567 07:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:46.567 07:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.825 07:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.083 07:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:47.083 07:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:47.341 true 00:07:47.341 07:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:47.341 07:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.599 07:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.858 07:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:47.858 07:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:48.115 true 00:07:48.115 07:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:48.115 07:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.049 07:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.307 07:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:49.307 07:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:49.564 true 00:07:49.564 07:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:49.565 07:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.822 07:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.079 07:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:50.079 07:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:50.337 true 00:07:50.337 07:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:50.337 07:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.594 07:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.852 
07:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:50.852 07:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:51.110 true 00:07:51.110 07:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:51.110 07:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.043 07:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.300 07:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:52.300 07:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:52.557 true 00:07:52.557 07:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:52.557 07:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.815 07:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.073 07:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:53.073 07:41:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:53.330 true 00:07:53.330 07:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:53.330 07:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.896 07:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.896 07:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:53.896 07:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:54.154 true 00:07:54.154 07:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:54.154 07:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.526 07:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.526 07:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:55.526 07:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:55.783 true 00:07:55.783 07:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:55.783 07:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.041 07:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.297 07:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:56.297 07:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:56.553 true 00:07:56.553 07:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:56.553 07:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.811 07:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.069 07:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:57.069 07:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 
00:07:57.327 true 00:07:57.327 07:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:57.327 07:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.260 07:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.825 07:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:58.825 07:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:58.825 true 00:07:58.825 07:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:07:58.825 07:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.083 07:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.340 07:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:59.340 07:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:59.598 true 00:07:59.598 07:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
608512 00:07:59.598 07:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.164 07:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.164 07:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:00.164 07:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:00.421 true 00:08:00.421 07:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:08:00.421 07:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.793 07:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.793 07:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:01.793 07:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:02.050 true 00:08:02.050 07:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:08:02.050 07:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.308 07:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.566 07:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:02.566 07:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:02.824 true 00:08:02.824 07:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:08:02.824 07:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.081 07:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.338 07:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:03.338 07:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:03.595 true 00:08:03.595 07:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:08:03.595 07:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.978 
07:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.978 07:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:04.978 07:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:05.235 true 00:08:05.235 07:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:08:05.235 07:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.493 07:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.751 07:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:05.751 07:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:06.008 true 00:08:06.008 07:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:08:06.008 07:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.266 07:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.523 07:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:06.523 07:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:06.781 true 00:08:06.781 07:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:08:06.781 07:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.786 07:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.088 07:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:08.088 07:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:08.345 true 00:08:08.345 07:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512 00:08:08.345 07:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.602 07:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:08.859 07:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:08.859 07:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:09.117 true
00:08:09.408 07:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512
00:08:09.409 07:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.667 07:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:09.925 07:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:09.925 07:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:10.182 true
00:08:10.182 07:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512
00:08:10.182 07:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:11.116 Initializing NVMe Controllers
00:08:11.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:11.116 Controller IO queue size 128, less than required.
00:08:11.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:11.116 Controller IO queue size 128, less than required.
00:08:11.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:11.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:11.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:11.116 Initialization complete. Launching workers.
00:08:11.116 ========================================================
00:08:11.116 Latency(us)
00:08:11.116 Device Information : IOPS MiB/s Average min max
00:08:11.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 451.67 0.22 114351.29 3794.95 1012362.97
00:08:11.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8271.81 4.04 15474.36 3284.65 455293.57
00:08:11.116 ========================================================
00:08:11.116 Total : 8723.48 4.26 20593.80 3284.65 1012362.97
00:08:11.116
00:08:11.116 07:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:11.373 07:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:11.373 07:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:11.632 true
00:08:11.632 07:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 608512
00:08:11.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (608512) - No such process
00:08:11.632 07:42:04
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 608512 00:08:11.632 07:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.890 07:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.147 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:12.147 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:12.147 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:12.147 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.147 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:12.405 null0 00:08:12.405 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.405 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.405 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:12.662 null1 00:08:12.662 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.662 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:08:12.662 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:12.920 null2 00:08:12.920 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.920 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.920 07:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:13.177 null3 00:08:13.177 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.177 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.177 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:13.435 null4 00:08:13.435 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.435 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.435 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:13.693 null5 00:08:13.693 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.693 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.693 07:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:13.951 null6 00:08:13.951 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.951 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.951 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:14.209 null7 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.468 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 612874 612875 612877 612879 612881 612883 612885 612887 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.469 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.727 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.727 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.727 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.727 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.727 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.727 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.727 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.727 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.985 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.985 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.985 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.985 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.985 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.986 07:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.244 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.244 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.244 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.244 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.244 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:08:15.244 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.244 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.244 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.502 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.760 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.760 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.760 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.760 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.760 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.760 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.760 07:42:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.760 07:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
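The trace above interleaves eight parallel `add_remove` workers, each repeatedly attaching and detaching one namespace against `nqn.2016-06.io.spdk:cnode1` via `rpc.py`. The following is a minimal sketch of that control flow (cf. the `sh@14-18` worker body and `sh@62-66` spawn/wait loop in the trace); the `rpc` stub standing in for `scripts/rpc.py` is an assumption added here so the loop can run without a live SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress add/remove pattern seen in the trace.
# NOTE: "rpc" is a stub for illustration; the real test invokes
# /var/jenkins/workspace/.../spdk/scripts/rpc.py against a running target.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
nthreads=8

# One worker: attach namespace $nsid backed by $bdev, then detach it,
# ten times in a row (matches the (( i < 10 )) loop in the trace).
add_remove() {
    local nsid=$1 bdev=$2
    local i
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

# Spawn one worker per null bdev (null0..null7 -> nsid 1..8) and wait,
# as in the (( i < nthreads )) loop and the "wait <pids>" line above.
pids=()
for ((t = 0; t < nthreads; t++)); do
    add_remove "$((t + 1))" "null$t" &
    pids+=($!)
done
wait "${pids[@]}"
```

Because the workers run concurrently, their rpc invocations interleave nondeterministically on stdout, which is exactly why the add/remove lines in the log above appear out of order across namespaces.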
00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.325 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.326 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.326 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.326 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.583 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.583 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.583 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.583 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.583 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.583 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.583 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.842 07:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.100 07:42:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.100 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.100 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.100 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.100 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.100 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.100 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.100 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.357 07:42:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.357 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.615 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.615 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.615 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.615 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.615 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.615 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.615 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.615 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.873 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.874 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.874 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.874 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.874 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.874 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.874 07:42:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.874 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.131 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.131 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.131 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.131 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.131 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.131 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.131 07:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.389 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.389 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.389 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.389 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.389 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.389 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.389 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.389 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.647 
07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.647 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.905 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.905 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.905 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.905 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.905 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.905 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.905 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.905 07:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.163 07:42:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.163 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.164 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.164 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.164 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.421 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.421 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.421 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.421 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.421 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.421 07:42:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.421 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.422 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.680 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.937 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.937 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.937 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.937 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.937 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.937 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.937 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.937 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.938 07:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.195 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.195 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.195 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.195 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.195 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.195 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.195 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.195 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.453 rmmod nvme_tcp 00:08:20.453 rmmod nvme_fabrics 00:08:20.453 rmmod nvme_keyring 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 608086 ']' 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 608086 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' 
-z 608086 ']' 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 608086 00:08:20.453 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:20.454 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.454 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608086 00:08:20.454 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.454 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.454 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608086' 00:08:20.454 killing process with pid 608086 00:08:20.454 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 608086 00:08:20.454 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 608086 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# iptables-restore 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.713 07:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.250 00:08:23.250 real 0m47.512s 00:08:23.250 user 3m42.233s 00:08:23.250 sys 0m15.441s 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.250 ************************************ 00:08:23.250 END TEST nvmf_ns_hotplug_stress 00:08:23.250 ************************************ 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.250 ************************************ 00:08:23.250 START TEST nvmf_delete_subsystem 00:08:23.250 ************************************ 00:08:23.250 07:42:15 
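The teardown traced above uses the classic probe-then-kill pattern: `kill -0 $pid` to test that the process still exists, `kill` to terminate it, then `wait` to reap it. A minimal sketch of that pattern, where a `sleep` child stands in for the nvmf target process (the real harness additionally checks the process name via `ps` before killing):

```shell
#!/usr/bin/env bash
# Sketch of the probe-then-kill teardown traced above.
sleep 30 &
pid=$!

# kill -0 delivers no signal; it only tests that the pid exists
# and is signalable by the current user.
if kill -0 "$pid" 2>/dev/null; then
    echo "killing process with pid $pid"
    kill "$pid"
fi

# wait reaps the child and reports its exit status
# (128 + 15 = 143 for SIGTERM).
wait "$pid"
status=$?
echo "status=$status"
```

Waiting after the kill matters: it prevents a zombie entry and lets the caller observe the child's termination status.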
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:23.250 * Looking for test storage... 00:08:23.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.250 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.251 07:42:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.251 07:42:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:23.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.251 --rc genhtml_branch_coverage=1 00:08:23.251 --rc genhtml_function_coverage=1 00:08:23.251 --rc genhtml_legend=1 00:08:23.251 --rc geninfo_all_blocks=1 00:08:23.251 --rc geninfo_unexecuted_blocks=1 00:08:23.251 00:08:23.251 ' 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:23.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.251 --rc genhtml_branch_coverage=1 00:08:23.251 --rc genhtml_function_coverage=1 00:08:23.251 --rc genhtml_legend=1 00:08:23.251 --rc geninfo_all_blocks=1 00:08:23.251 --rc geninfo_unexecuted_blocks=1 00:08:23.251 00:08:23.251 ' 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:23.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.251 --rc genhtml_branch_coverage=1 00:08:23.251 --rc genhtml_function_coverage=1 00:08:23.251 --rc genhtml_legend=1 00:08:23.251 --rc geninfo_all_blocks=1 00:08:23.251 --rc geninfo_unexecuted_blocks=1 00:08:23.251 00:08:23.251 ' 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:23.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.251 --rc genhtml_branch_coverage=1 00:08:23.251 --rc genhtml_function_coverage=1 00:08:23.251 --rc genhtml_legend=1 00:08:23.251 --rc geninfo_all_blocks=1 00:08:23.251 --rc geninfo_unexecuted_blocks=1 00:08:23.251 00:08:23.251 ' 
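The `cmp_versions` trace above splits each version string on `.`, `-` and `:` into an array and compares the fields numerically, left to right. A hedged re-sketch of that idea (the function name `version_lt` is mine, and it assumes purely numeric components like the `1.15` vs `2` comparison in the log):

```shell
#!/usr/bin/env bash
# Component-wise "less than" version compare, in the spirit of the
# cmp_versions routine traced above. Numeric fields only.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local n=${#v1[@]}
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for ((i = 0; i < n; i++)); do
        # missing fields compare as 0, so 1.15 vs 2 works
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

Splitting on `IFS=.-:` is what lets the same routine handle plain dotted versions and suffixed ones like `1.15-rc1`, though non-numeric suffix fields would need extra handling beyond this sketch.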
00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.251 07:42:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.251 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
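The `[: : integer expression expected` message logged above comes from feeding an empty string to the numeric `-eq` operator (`'[' '' -eq 1 ']'`). A short sketch of the failure and two common repairs, using a throwaway variable name:

```shell
#!/usr/bin/env bash
# Reproducing the logged failure: the numeric -eq test errors out
# (status 2) when the operand is an empty string.
maybe_empty=""
[ "$maybe_empty" -eq 1 ] 2>/dev/null || echo "numeric test fails on empty string"

# Repair 1: default the expansion to 0 before comparing.
[ "${maybe_empty:-0}" -eq 1 ] || echo "defaulted test: not 1"

# Repair 2: use arithmetic evaluation, which treats an unset or
# empty variable as 0.
if (( maybe_empty == 1 )); then echo "is 1"; else echo "arithmetic test: not 1"; fi
```

Either repair turns the hard error into an ordinary false result, which is usually what a guard like the one in nvmf/common.sh intends.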
nvmftestinit 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.252 07:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.157 07:42:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.157 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.158 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:25.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:25.158 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:25.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:08:25.417 00:08:25.417 --- 10.0.0.2 ping statistics --- 00:08:25.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.417 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:25.417 00:08:25.417 --- 10.0.0.1 ping statistics --- 00:08:25.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.417 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:25.417 07:42:18 
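The topology built above — one port moved into a target namespace with 10.0.0.2/24, the peer port left on the host with 10.0.0.1/24, an iptables ACCEPT for the NVMe/TCP listener port, then ping in both directions — reduces to a short privileged command sequence. A sketch for orientation only: `veth0`/`veth1` and `spdk_ns` are hypothetical stand-ins for the `cvl_0_*` ports and `cvl_0_0_ns_spdk`, and every command needs root:

```shell
# Hedged sketch of the namespace wiring seen in the log (requires root;
# interface and namespace names here are illustrative, not the real ones).
ip netns add spdk_ns                              # target-side namespace
ip link set veth0 netns spdk_ns                   # move the target NIC into it
ip addr add 10.0.0.1/24 dev veth1                 # initiator address, host side
ip netns exec spdk_ns ip addr add 10.0.0.2/24 dev veth0
ip link set veth1 up
ip netns exec spdk_ns ip link set veth0 up
ip netns exec spdk_ns ip link set lo up
# open the NVMe/TCP listener port toward the initiator
iptables -I INPUT 1 -i veth1 -p tcp --dport 4420 -j ACCEPT
# sanity check, matching the ping output in the log
ping -c 1 10.0.0.2
ip netns exec spdk_ns ping -c 1 10.0.0.1
```

The namespace gives the target a fully separate network stack on the same host, which is why the harness can exercise real TCP traffic between "initiator" and "target" on one machine.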
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=616127 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 616127 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 616127 ']' 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.417 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.418 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.418 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.418 [2024-11-18 07:42:18.333614] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:25.418 [2024-11-18 07:42:18.333696] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:25.418 [2024-11-18 07:42:18.405962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:25.418 [2024-11-18 07:42:18.453411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:25.418 [2024-11-18 07:42:18.453473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:25.418 [2024-11-18 07:42:18.453485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:25.418 [2024-11-18 07:42:18.453519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:25.418 [2024-11-18 07:42:18.453530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:25.418 [2024-11-18 07:42:18.454968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:25.418 [2024-11-18 07:42:18.454973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:25.676 [2024-11-18 07:42:18.601008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem
-- common/autotest_common.sh@10 -- # set +x
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:25.676 [2024-11-18 07:42:18.617222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:25.676 NULL1
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:25.676 Delay0
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=616155
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:25.676 07:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:25.676 [2024-11-18 07:42:18.702013] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:08:27.582 07:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:27.582 07:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.582 07:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:27.840 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries condensed ...]
00:08:27.840 [2024-11-18 07:42:20.864757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd2c0000c40 is same with the state(6) to be set
00:08:27.841 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries condensed ...]
00:08:28.774 [2024-11-18 07:42:21.838931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd37190 is same with the state(6) to be set
00:08:29.032 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries condensed ...]
00:08:29.032 [2024-11-18 07:42:21.866662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd2c000d7e0 is same with the state(6) to be set
00:08:29.032 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries condensed ...]
00:08:29.032 [2024-11-18 07:42:21.866853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd2c000d020 is same with the state(6) to be set
00:08:29.032 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries condensed ...]
00:08:29.032 [2024-11-18 07:42:21.867288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd38f70 is same with the state(6) to be set
00:08:29.032 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries condensed ...]
00:08:29.032 [2024-11-18 07:42:21.867913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd39330 is same with the state(6) to be set
00:08:29.032 Initializing NVMe Controllers
00:08:29.032
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:29.032 Controller IO queue size 128, less than required.
00:08:29.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:29.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:29.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:29.032 Initialization complete. Launching workers.
00:08:29.032 ========================================================
00:08:29.032                                                       Latency(us)
00:08:29.032 Device Information                                               :     IOPS    MiB/s   Average      min      max
00:08:29.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   170.26     0.08  912294.23   567.82  2003085.21
00:08:29.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   161.32     0.08  928964.43   556.35  2002058.86
00:08:29.032 ========================================================
00:08:29.032 Total                                                            :   331.58     0.16  920404.73   556.35  2003085.21
00:08:29.032
00:08:29.032 [2024-11-18 07:42:21.868438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd37190 (9): Bad file descriptor
00:08:29.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:29.032 07:42:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.032 07:42:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:29.032 07:42:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 616155
00:08:29.032 07:42:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:29.290 07:42:22
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 616155
00:08:29.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (616155) - No such process
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 616155
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 616155
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 616155
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:29.290 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:29.548 [2024-11-18 07:42:22.392735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=616637
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 616637
00:08:29.548 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:29.548 [2024-11-18 07:42:22.465690] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:08:30.113 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:30.113 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 616637
00:08:30.113 07:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:30.407 07:42:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:30.407 07:42:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 616637
00:08:30.407 07:42:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:31.028 07:42:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:31.028 07:42:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 616637
00:08:31.028 07:42:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:31.591 07:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:31.591 07:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 616637
00:08:31.591 07:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.848 07:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.848 07:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 616637 00:08:31.848 07:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.413 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.413 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 616637 00:08:32.413 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.671 Initializing NVMe Controllers 00:08:32.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:32.671 Controller IO queue size 128, less than required. 00:08:32.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:32.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:32.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:32.672 Initialization complete. Launching workers. 
00:08:32.672 ======================================================== 00:08:32.672 Latency(us) 00:08:32.672 Device Information : IOPS MiB/s Average min max 00:08:32.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003698.22 1000164.05 1011602.08 00:08:32.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005097.46 1000145.42 1041451.77 00:08:32.672 ======================================================== 00:08:32.672 Total : 256.00 0.12 1004397.84 1000145.42 1041451.77 00:08:32.672 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 616637 00:08:32.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (616637) - No such process 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 616637 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
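The repeated `kill -0 616637` / `sleep 0.5` records above are delete_subsystem.sh's bounded wait for the I/O process to exit, ending in the `No such process` message once it is gone. A minimal sketch of that polling pattern (the helper name and the example PID are ours, not from the script):

```shell
#!/usr/bin/env bash
# Bounded wait-for-exit, mirroring the delete_subsystem.sh trace above:
# poll the PID with `kill -0` every 0.5 s, give up after ~10 s.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # timed out, process still alive
        sleep 0.5
    done
    return 0                             # process has exited
}

# Example: a short-lived background job exits well within the window.
sleep 0.2 &
wait_for_exit $! && echo "exited"
```

`kill -0` sends no signal; it only checks that the PID exists and is signalable, which is why the loop ends with the bare `kill: (616637) - No such process` diagnostic rather than an actual termination.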
nvme-tcp 00:08:32.937 rmmod nvme_tcp 00:08:32.937 rmmod nvme_fabrics 00:08:32.937 rmmod nvme_keyring 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 616127 ']' 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 616127 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 616127 ']' 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 616127 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.937 07:42:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 616127 00:08:32.937 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.937 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.937 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 616127' 00:08:32.937 killing process with pid 616127 00:08:32.937 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 616127 00:08:32.937 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 616127 
00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.195 07:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:35.740 00:08:35.740 real 0m12.439s 00:08:35.740 user 0m28.048s 00:08:35.740 sys 0m2.967s 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.740 ************************************ 00:08:35.740 END TEST 
nvmf_delete_subsystem 00:08:35.740 ************************************ 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.740 ************************************ 00:08:35.740 START TEST nvmf_host_management 00:08:35.740 ************************************ 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:35.740 * Looking for test storage... 00:08:35.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.740 07:42:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.740 --rc genhtml_branch_coverage=1 00:08:35.740 --rc genhtml_function_coverage=1 00:08:35.740 --rc genhtml_legend=1 00:08:35.740 --rc 
geninfo_all_blocks=1 00:08:35.740 --rc geninfo_unexecuted_blocks=1 00:08:35.740 00:08:35.740 ' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.740 --rc genhtml_branch_coverage=1 00:08:35.740 --rc genhtml_function_coverage=1 00:08:35.740 --rc genhtml_legend=1 00:08:35.740 --rc geninfo_all_blocks=1 00:08:35.740 --rc geninfo_unexecuted_blocks=1 00:08:35.740 00:08:35.740 ' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.740 --rc genhtml_branch_coverage=1 00:08:35.740 --rc genhtml_function_coverage=1 00:08:35.740 --rc genhtml_legend=1 00:08:35.740 --rc geninfo_all_blocks=1 00:08:35.740 --rc geninfo_unexecuted_blocks=1 00:08:35.740 00:08:35.740 ' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.740 --rc genhtml_branch_coverage=1 00:08:35.740 --rc genhtml_function_coverage=1 00:08:35.740 --rc genhtml_legend=1 00:08:35.740 --rc geninfo_all_blocks=1 00:08:35.740 --rc geninfo_unexecuted_blocks=1 00:08:35.740 00:08:35.740 ' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
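The scripts/common.sh trace above (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2`, splitting on `IFS=.-:` and returning 0) is a field-by-field version comparison used to pick lcov options. A simplified reconstruction of that logic, not the exact script:

```shell
#!/usr/bin/env bash
# Field-by-field "less than" version compare, as walked through in the
# cmp_versions trace above. Split both versions on '.', '-' or ':' and
# compare numerically; missing fields count as 0.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

This matches the traced outcome: `lt 1.15 2` succeeds (exit 0), so the script selects the branch-coverage lcov flags for the older tool version.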
NVMF_SECOND_PORT=4421 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.740 
07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.740 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.741 07:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:37.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:37.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.653 07:42:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:37.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:37.653 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:37.653 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:37.654 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:37.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:08:37.913 00:08:37.913 --- 10.0.0.2 ping statistics --- 00:08:37.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.913 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:08:37.913 00:08:37.913 --- 10.0.0.1 ping statistics --- 00:08:37.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.913 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.913 07:42:30 
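The `nvmf_tcp_init` sequence traced above can be sketched as a single function: flush both ports, isolate the target-side port in its own network namespace, address both ends, open TCP port 4420, and ping in both directions. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are taken from this log; this is a sketch of the pattern, not the test suite's exact `nvmf/common.sh` code, and it needs root, so it is defined but not invoked here.

```shell
# Sketch of the nvmf_tcp_init steps logged above (requires root; defined,
# not executed). Interface and namespace names mirror this log run.
setup_tcp_test_net() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    # Start from a clean slate on both ports
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Move the target-side port into its own network namespace
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # Initiator keeps 10.0.0.1; target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Accept NVMe/TCP traffic (default port 4420) arriving from the initiator
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Verify connectivity in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The namespace split is what lets a single host act as both NVMe-oF target and initiator over real NIC ports without the kernel short-circuiting the traffic.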
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=619038 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 619038 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 619038 ']' 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.913 07:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.913 [2024-11-18 07:42:30.850566] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:37.913 [2024-11-18 07:42:30.850671] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.913 [2024-11-18 07:42:30.928627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.913 [2024-11-18 07:42:30.975142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.913 [2024-11-18 07:42:30.975218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.913 [2024-11-18 07:42:30.975242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.913 [2024-11-18 07:42:30.975253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.913 [2024-11-18 07:42:30.975263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:37.913 [2024-11-18 07:42:30.976951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.913 [2024-11-18 07:42:30.977015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.913 [2024-11-18 07:42:30.977084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.913 [2024-11-18 07:42:30.977082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.172 [2024-11-18 07:42:31.122784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:38.172 07:42:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.172 Malloc0 00:08:38.172 [2024-11-18 07:42:31.205460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=619086 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 619086 /var/tmp/bdevperf.sock 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 619086 ']' 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
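The `rm -rf rpcs.txt` / `cat` / `rpc_cmd` steps above feed a batch of RPCs to the freshly started target: create the TCP transport, back a subsystem with the `Malloc0` bdev seen in the log, and listen on 10.0.0.2:4420. A hedged sketch of that batch follows; the `scripts/rpc.py` path, the Malloc0 size (64 MiB, 512-byte blocks), and the `SPDK0` serial are assumptions for illustration, while the RPC method names (`nvmf_create_transport`, `bdev_malloc_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`) are standard SPDK RPCs. The function needs a running target, so it is defined but not invoked here.

```shell
# Hedged sketch of the subsystem-creation RPC batch implied by the log.
# Assumes $rpc points at SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock.
create_host_management_subsystem() {
    local rpc=${rpc:-scripts/rpc.py}

    # Transport options mirror the log: -t tcp -o, 8192-byte in-capsule data
    "$rpc" nvmf_create_transport -t tcp -o -u 8192

    # Back the subsystem with a RAM disk (size/serial are illustrative)
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0

    # Listen where the log reports: 10.0.0.2 port 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
}
```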
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.172 { 00:08:38.172 "params": { 00:08:38.172 "name": "Nvme$subsystem", 00:08:38.172 "trtype": "$TEST_TRANSPORT", 00:08:38.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.172 "adrfam": "ipv4", 00:08:38.172 "trsvcid": "$NVMF_PORT", 00:08:38.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.172 "hdgst": ${hdgst:-false}, 
00:08:38.172 "ddgst": ${ddgst:-false} 00:08:38.172 }, 00:08:38.172 "method": "bdev_nvme_attach_controller" 00:08:38.172 } 00:08:38.172 EOF 00:08:38.172 )") 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:38.172 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.172 "params": { 00:08:38.172 "name": "Nvme0", 00:08:38.172 "trtype": "tcp", 00:08:38.172 "traddr": "10.0.0.2", 00:08:38.172 "adrfam": "ipv4", 00:08:38.172 "trsvcid": "4420", 00:08:38.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:38.172 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:38.172 "hdgst": false, 00:08:38.172 "ddgst": false 00:08:38.172 }, 00:08:38.172 "method": "bdev_nvme_attach_controller" 00:08:38.172 }' 00:08:38.431 [2024-11-18 07:42:31.290369] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:08:38.431 [2024-11-18 07:42:31.290468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619086 ] 00:08:38.431 [2024-11-18 07:42:31.363285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.431 [2024-11-18 07:42:31.410253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.689 Running I/O for 10 seconds... 
00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:38.948 07:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:39.216 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:39.216 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:39.216 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.217 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.217 [2024-11-18 07:42:32.140525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 
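The `waitforio` trace above polls `bdev_get_iostat` through `jq -r '.bdevs[0].num_read_ops'` until at least 100 reads complete or ten attempts are exhausted (here 67 on the first poll, 579 on the second). The pattern can be sketched as below; `read_ops_cmd` is a hypothetical stand-in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq ...` query, and the stub that drives the demo is purely illustrative.

```shell
# Sketch of the waitforio polling loop traced above: retry up to 10 times,
# succeed once the read count reaches 100.
waitforio() {
    local read_ops_cmd=$1 i ret=1 count
    for ((i = 10; i != 0; i--)); do
        count=$("$read_ops_cmd")
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Hypothetical stub standing in for the iostat query: reports 70 reads on
# the first poll and 140 on the second, mimicking the 67 -> 579 progression.
_io_state=/tmp/waitforio_demo.$$
fake_read_ops() {
    local n=$(( $(cat "$_io_state" 2>/dev/null || echo 0) + 70 ))
    echo "$n" > "$_io_state"
    echo "$n"
}

waitforio fake_read_ops && echo "I/O observed"
rm -f "$_io_state"
```

Polling the initiator-side bdev rather than the target confirms end-to-end traffic is flowing before the test moves on to yanking the host out of the subsystem.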
07:42:32.140710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140863] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.140990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141012] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.141147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a5b0 is same with the state(6) to be set 00:08:39.217 [2024-11-18 07:42:32.143772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.143841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.143879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.143894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.143910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.143930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.143947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.143961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.143976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.143989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:39.217 [2024-11-18 07:42:32.144206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.217 [2024-11-18 07:42:32.144327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.217 [2024-11-18 07:42:32.144343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 
07:42:32.144388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.218 [2024-11-18 07:42:32.144906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 
07:42:32.144922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.144984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.144999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.218 [2024-11-18 07:42:32.145059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.218 [2024-11-18 07:42:32.145183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.218 [2024-11-18 07:42:32.145306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145398] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.218 [2024-11-18 07:42:32.145618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.218 [2024-11-18 07:42:32.145632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.219 [2024-11-18 07:42:32.145648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.219 [2024-11-18 07:42:32.145662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.219 [2024-11-18 07:42:32.145677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.219 [2024-11-18 07:42:32.145692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.219 [2024-11-18 07:42:32.145707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.219 [2024-11-18 07:42:32.145722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.219 [2024-11-18 07:42:32.145738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.219 [2024-11-18 07:42:32.145752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.219 
[2024-11-18 07:42:32.145768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.219 [2024-11-18 07:42:32.145786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.219 [2024-11-18 07:42:32.145802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.219 [2024-11-18 07:42:32.145816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.219 [2024-11-18 07:42:32.145832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.219 [2024-11-18 07:42:32.145850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.219 [2024-11-18 07:42:32.145888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:08:39.219 [2024-11-18 07:42:32.147072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:39.219
task offset: 81920 on job bdev=Nvme0n1 fails
00:08:39.219
00:08:39.219 Latency(us)
00:08:39.219 [2024-11-18T06:42:32.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:39.219 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:39.219 Job: Nvme0n1 ended in about 0.39 seconds with error
00:08:39.219 Verification LBA range: start 0x0 length 0x400
00:08:39.219 Nvme0n1 : 0.39 1620.29 101.27 162.03 0.00 34856.00 2852.03 34369.99
00:08:39.219 [2024-11-18T06:42:32.307Z] ===================================================================================================================
00:08:39.219 [2024-11-18T06:42:32.307Z] Total : 1620.29 101.27 162.03 0.00 34856.00 2852.03 34369.99
00:08:39.219 [2024-11-18 07:42:32.149000] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.219 [2024-11-18 07:42:32.149028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13eb970 (9): Bad file descriptor 00:08:39.219 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.219 07:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 [2024-11-18 07:42:32.209846] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 619086 00:08:40.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (619086) - No such process 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:40.153 07:42:33
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.153 { 00:08:40.153 "params": { 00:08:40.153 "name": "Nvme$subsystem", 00:08:40.153 "trtype": "$TEST_TRANSPORT", 00:08:40.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.153 "adrfam": "ipv4", 00:08:40.153 "trsvcid": "$NVMF_PORT", 00:08:40.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.153 "hdgst": ${hdgst:-false}, 00:08:40.153 "ddgst": ${ddgst:-false} 00:08:40.153 }, 00:08:40.153 "method": "bdev_nvme_attach_controller" 00:08:40.153 } 00:08:40.153 EOF 00:08:40.153 )") 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:40.153 07:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.153 "params": { 00:08:40.153 "name": "Nvme0", 00:08:40.153 "trtype": "tcp", 00:08:40.153 "traddr": "10.0.0.2", 00:08:40.153 "adrfam": "ipv4", 00:08:40.153 "trsvcid": "4420", 00:08:40.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:40.153 "hdgst": false, 00:08:40.153 "ddgst": false 00:08:40.153 }, 00:08:40.153 "method": "bdev_nvme_attach_controller" 00:08:40.153 }' 00:08:40.153 [2024-11-18 07:42:33.203624] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:40.153 [2024-11-18 07:42:33.203703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619363 ] 00:08:40.412 [2024-11-18 07:42:33.273885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.412 [2024-11-18 07:42:33.320306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.671 Running I/O for 1 seconds...
00:08:41.605 1664.00 IOPS, 104.00 MiB/s
00:08:41.605 Latency(us)
00:08:41.605 [2024-11-18T06:42:34.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:41.605 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:41.605 Verification LBA range: start 0x0 length 0x400
00:08:41.605 Nvme0n1 : 1.01 1710.42 106.90 0.00 0.00 36801.47 4587.52 33399.09
00:08:41.605 [2024-11-18T06:42:34.693Z] ===================================================================================================================
00:08:41.605 [2024-11-18T06:42:34.693Z] Total : 1710.42 106.90 0.00 0.00 36801.47 4587.52 33399.09
00:08:41.863 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:41.863 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:41.863 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:41.863 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:41.863 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:41.863 07:42:34
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.864 rmmod nvme_tcp 00:08:41.864 rmmod nvme_fabrics 00:08:41.864 rmmod nvme_keyring 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 619038 ']' 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 619038 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 619038 ']' 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 619038 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619038 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619038' 00:08:41.864 killing process with pid 619038 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 619038 00:08:41.864 07:42:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 619038 00:08:42.123 [2024-11-18 07:42:35.061394] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.123 07:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:44.660 00:08:44.660 real 0m8.844s 00:08:44.660 user 0m19.724s 00:08:44.660 sys 0m2.690s 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.660 ************************************ 00:08:44.660 END TEST nvmf_host_management 00:08:44.660 ************************************ 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.660 ************************************ 00:08:44.660 START TEST nvmf_lvol 00:08:44.660 ************************************ 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:44.660 * Looking for test storage... 
00:08:44.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.660 07:42:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.660 --rc genhtml_branch_coverage=1 00:08:44.660 --rc genhtml_function_coverage=1 00:08:44.660 --rc genhtml_legend=1 00:08:44.660 --rc geninfo_all_blocks=1 00:08:44.660 --rc geninfo_unexecuted_blocks=1 
00:08:44.660 00:08:44.660 ' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.660 --rc genhtml_branch_coverage=1 00:08:44.660 --rc genhtml_function_coverage=1 00:08:44.660 --rc genhtml_legend=1 00:08:44.660 --rc geninfo_all_blocks=1 00:08:44.660 --rc geninfo_unexecuted_blocks=1 00:08:44.660 00:08:44.660 ' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.660 --rc genhtml_branch_coverage=1 00:08:44.660 --rc genhtml_function_coverage=1 00:08:44.660 --rc genhtml_legend=1 00:08:44.660 --rc geninfo_all_blocks=1 00:08:44.660 --rc geninfo_unexecuted_blocks=1 00:08:44.660 00:08:44.660 ' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.660 --rc genhtml_branch_coverage=1 00:08:44.660 --rc genhtml_function_coverage=1 00:08:44.660 --rc genhtml_legend=1 00:08:44.660 --rc geninfo_all_blocks=1 00:08:44.660 --rc geninfo_unexecuted_blocks=1 00:08:44.660 00:08:44.660 ' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.660 07:42:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.660 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.661 07:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:46.565 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:46.566 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:46.566 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:46.566 
07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:46.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:46.566 07:42:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:46.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.566 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:46.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:46.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:08:46.826 00:08:46.826 --- 10.0.0.2 ping statistics --- 00:08:46.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.826 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:08:46.826 00:08:46.826 --- 10.0.0.1 ping statistics --- 00:08:46.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.826 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=621580 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 621580 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 621580 ']' 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.826 07:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.826 [2024-11-18 07:42:39.797636] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:46.826 [2024-11-18 07:42:39.797716] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.826 [2024-11-18 07:42:39.868209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.826 [2024-11-18 07:42:39.910812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.826 [2024-11-18 07:42:39.910882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.826 [2024-11-18 07:42:39.910896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.826 [2024-11-18 07:42:39.910907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.826 [2024-11-18 07:42:39.910932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.826 [2024-11-18 07:42:39.912423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.826 [2024-11-18 07:42:39.912482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.826 [2024-11-18 07:42:39.912488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.085 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.085 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:47.085 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.085 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.085 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:47.085 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.085 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:47.343 [2024-11-18 07:42:40.305250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.343 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:47.601 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:47.601 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:47.859 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:47.859 07:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:48.117 07:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:48.683 07:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=adfda772-998e-491f-9ee0-0c801c2f1f3e 00:08:48.683 07:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u adfda772-998e-491f-9ee0-0c801c2f1f3e lvol 20 00:08:48.940 07:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0eb64dae-e644-4779-bb92-2d9d4027bd9c 00:08:48.940 07:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:49.198 07:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0eb64dae-e644-4779-bb92-2d9d4027bd9c 00:08:49.456 07:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:49.714 [2024-11-18 07:42:42.568077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.714 07:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.972 07:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=621899 00:08:49.972 07:42:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:49.972 07:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:50.906 07:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0eb64dae-e644-4779-bb92-2d9d4027bd9c MY_SNAPSHOT 00:08:51.164 07:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=63d66147-aeda-4d65-9abd-4ab0d8e6c551 00:08:51.164 07:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0eb64dae-e644-4779-bb92-2d9d4027bd9c 30 00:08:51.422 07:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 63d66147-aeda-4d65-9abd-4ab0d8e6c551 MY_CLONE 00:08:51.988 07:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=28a6fcc7-e781-4ea0-9693-63cc778c0324 00:08:51.988 07:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 28a6fcc7-e781-4ea0-9693-63cc778c0324 00:08:52.555 07:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 621899 00:09:00.668 Initializing NVMe Controllers 00:09:00.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:00.668 Controller IO queue size 128, less than required. 00:09:00.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:00.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:00.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:00.668 Initialization complete. Launching workers. 00:09:00.668 ======================================================== 00:09:00.668 Latency(us) 00:09:00.668 Device Information : IOPS MiB/s Average min max 00:09:00.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10386.70 40.57 12333.75 589.01 79357.31 00:09:00.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10341.10 40.39 12385.05 2474.93 75820.63 00:09:00.668 ======================================================== 00:09:00.668 Total : 20727.80 80.97 12359.34 589.01 79357.31 00:09:00.668 00:09:00.668 07:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:00.668 07:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0eb64dae-e644-4779-bb92-2d9d4027bd9c 00:09:00.926 07:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u adfda772-998e-491f-9ee0-0c801c2f1f3e 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.185 rmmod nvme_tcp 00:09:01.185 rmmod nvme_fabrics 00:09:01.185 rmmod nvme_keyring 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 621580 ']' 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 621580 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 621580 ']' 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 621580 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621580 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 621580' 00:09:01.185 killing process with pid 621580 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 621580 00:09:01.185 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 621580 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.445 07:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:03.983 00:09:03.983 real 0m19.346s 00:09:03.983 user 1m5.569s 00:09:03.983 sys 0m5.643s 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:03.983 ************************************ 00:09:03.983 END TEST nvmf_lvol 00:09:03.983 
************************************ 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.983 ************************************ 00:09:03.983 START TEST nvmf_lvs_grow 00:09:03.983 ************************************ 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:03.983 * Looking for test storage... 00:09:03.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.983 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.984 --rc genhtml_branch_coverage=1 00:09:03.984 --rc genhtml_function_coverage=1 00:09:03.984 --rc genhtml_legend=1 00:09:03.984 --rc geninfo_all_blocks=1 00:09:03.984 --rc geninfo_unexecuted_blocks=1 00:09:03.984 00:09:03.984 ' 
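The trace above is scripts/common.sh deciding which lcov option spelling to use by testing whether the installed lcov (1.15 here) is older than 2. A minimal stand-in for that lt predicate, using GNU sort -V instead of the script's own field-by-field comparison (an assumption for brevity, not the harness's actual code):

```shell
# Minimal stand-in for the version check traced above. scripts/common.sh
# splits versions on ".-:" and compares them field by field; here we
# approximate the same "lt" (less-than) predicate with GNU sort -V, which is
# an assumption rather than the harness's real implementation.
lt() {
  [ "$1" = "$2" ] && return 1                      # equal is not less-than
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if lt 1.15 2; then
  # matches the log: lcov 1.x selects the --rc lcov_*_coverage option spelling
  echo "lcov < 2: using lcov_branch_coverage/lcov_function_coverage flags"
fi
```

Note that version comparison is not lexicographic: sort -V correctly ranks 1.9 below 1.15, which a plain string compare would get backwards.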
00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.984 --rc genhtml_branch_coverage=1 00:09:03.984 --rc genhtml_function_coverage=1 00:09:03.984 --rc genhtml_legend=1 00:09:03.984 --rc geninfo_all_blocks=1 00:09:03.984 --rc geninfo_unexecuted_blocks=1 00:09:03.984 00:09:03.984 ' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.984 --rc genhtml_branch_coverage=1 00:09:03.984 --rc genhtml_function_coverage=1 00:09:03.984 --rc genhtml_legend=1 00:09:03.984 --rc geninfo_all_blocks=1 00:09:03.984 --rc geninfo_unexecuted_blocks=1 00:09:03.984 00:09:03.984 ' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.984 --rc genhtml_branch_coverage=1 00:09:03.984 --rc genhtml_function_coverage=1 00:09:03.984 --rc genhtml_legend=1 00:09:03.984 --rc geninfo_all_blocks=1 00:09:03.984 --rc geninfo_unexecuted_blocks=1 00:09:03.984 00:09:03.984 ' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.984 07:42:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.984 
07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.984 07:42:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.984 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.985 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.985 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.985 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.985 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.985 
07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:03.985 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:03.985 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:03.985 07:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:05.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:05.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.888 
07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:05.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.888 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:05.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.889 07:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:06.149 07:42:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:06.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:06.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms
00:09:06.149
00:09:06.149 --- 10.0.0.2 ping statistics ---
00:09:06.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:06.149 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:06.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:06.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms
00:09:06.149
00:09:06.149 --- 10.0.0.1 ping statistics ---
00:09:06.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:06.149 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- #
nvmfappstart -m 0x1 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=625297 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 625297 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 625297 ']' 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.149 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.149 [2024-11-18 07:42:59.217207] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:09:06.149 [2024-11-18 07:42:59.217295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.407 [2024-11-18 07:42:59.288554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.407 [2024-11-18 07:42:59.331157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.407 [2024-11-18 07:42:59.331217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.407 [2024-11-18 07:42:59.331245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.407 [2024-11-18 07:42:59.331256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.407 [2024-11-18 07:42:59.331266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:06.407 [2024-11-18 07:42:59.331856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.407 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.407 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:06.407 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.407 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.407 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.407 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.407 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:06.666 [2024-11-18 07:42:59.698710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.666 ************************************ 00:09:06.666 START TEST lvs_grow_clean 00:09:06.666 ************************************ 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.666 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.925 07:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.184 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:07.184 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:07.443 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:07.443 07:43:00 
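The cluster-count assertions in this test (49 before the grow, 99 after, 61 free at teardown) follow directly from the sizes used above: a 200 MiB aio file carved into 4 MiB (`--cluster-sz 4194304`) clusters. A minimal sketch of that arithmetic, using only numbers from this log; the one-cluster gap between the raw count and the reported `total_data_clusters` is lvstore metadata overhead (its exact size depends on `--md-pages-per-cluster-ratio`):

```shell
cluster_mb=4
raw_before=$(( 200 / cluster_mb ))    # 50 raw clusters in the 200 MiB file
raw_after=$(( 400 / cluster_mb ))     # 100 after truncate -s 400M + rescan
# The log reports 49 and 99 total_data_clusters: one cluster's worth
# goes to lvstore metadata in both cases.
data_before=$(( raw_before - 1 ))     # 49, matches the first check
data_after=$(( raw_after - 1 ))       # 99, matches the post-grow check
free_after=$(( data_after - 38 ))     # 61: 38 clusters held by the 150M lvol
echo "$data_before $data_after $free_after"
```

The `free_clusters=61` read back near the end of the test is the same 99 minus the 38 clusters allocated to the lvol.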
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:07.443 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:07.702 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:07.702 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:07.702 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c lvol 150 00:09:07.960 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5962eb0d-4983-4b3a-9f1e-d34b9223d319 00:09:07.960 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:07.960 07:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:08.218 [2024-11-18 07:43:01.163073] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:08.218 [2024-11-18 07:43:01.163177] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:08.218 true 00:09:08.218 07:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:08.218 07:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:08.476 07:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:08.476 07:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:08.734 07:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5962eb0d-4983-4b3a-9f1e-d34b9223d319 00:09:08.992 07:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:09.252 [2024-11-18 07:43:02.258377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.252 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=625735 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:09.511 07:43:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 625735 /var/tmp/bdevperf.sock 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 625735 ']' 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:09.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.511 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:09.511 [2024-11-18 07:43:02.591622] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:09:09.511 [2024-11-18 07:43:02.591699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625735 ] 00:09:09.770 [2024-11-18 07:43:02.658990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.770 [2024-11-18 07:43:02.703777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.770 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.770 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:09.770 07:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:10.337 Nvme0n1 00:09:10.337 07:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:10.595 [ 00:09:10.595 { 00:09:10.595 "name": "Nvme0n1", 00:09:10.595 "aliases": [ 00:09:10.595 "5962eb0d-4983-4b3a-9f1e-d34b9223d319" 00:09:10.595 ], 00:09:10.595 "product_name": "NVMe disk", 00:09:10.595 "block_size": 4096, 00:09:10.595 "num_blocks": 38912, 00:09:10.595 "uuid": "5962eb0d-4983-4b3a-9f1e-d34b9223d319", 00:09:10.595 "numa_id": 0, 00:09:10.595 "assigned_rate_limits": { 00:09:10.595 "rw_ios_per_sec": 0, 00:09:10.595 "rw_mbytes_per_sec": 0, 00:09:10.595 "r_mbytes_per_sec": 0, 00:09:10.595 "w_mbytes_per_sec": 0 00:09:10.595 }, 00:09:10.595 "claimed": false, 00:09:10.595 "zoned": false, 00:09:10.595 "supported_io_types": { 00:09:10.595 "read": true, 
00:09:10.595 "write": true, 00:09:10.595 "unmap": true, 00:09:10.595 "flush": true, 00:09:10.595 "reset": true, 00:09:10.595 "nvme_admin": true, 00:09:10.595 "nvme_io": true, 00:09:10.595 "nvme_io_md": false, 00:09:10.595 "write_zeroes": true, 00:09:10.595 "zcopy": false, 00:09:10.595 "get_zone_info": false, 00:09:10.595 "zone_management": false, 00:09:10.595 "zone_append": false, 00:09:10.595 "compare": true, 00:09:10.596 "compare_and_write": true, 00:09:10.596 "abort": true, 00:09:10.596 "seek_hole": false, 00:09:10.596 "seek_data": false, 00:09:10.596 "copy": true, 00:09:10.596 "nvme_iov_md": false 00:09:10.596 }, 00:09:10.596 "memory_domains": [ 00:09:10.596 { 00:09:10.596 "dma_device_id": "system", 00:09:10.596 "dma_device_type": 1 00:09:10.596 } 00:09:10.596 ], 00:09:10.596 "driver_specific": { 00:09:10.596 "nvme": [ 00:09:10.596 { 00:09:10.596 "trid": { 00:09:10.596 "trtype": "TCP", 00:09:10.596 "adrfam": "IPv4", 00:09:10.596 "traddr": "10.0.0.2", 00:09:10.596 "trsvcid": "4420", 00:09:10.596 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:10.596 }, 00:09:10.596 "ctrlr_data": { 00:09:10.596 "cntlid": 1, 00:09:10.596 "vendor_id": "0x8086", 00:09:10.596 "model_number": "SPDK bdev Controller", 00:09:10.596 "serial_number": "SPDK0", 00:09:10.596 "firmware_revision": "25.01", 00:09:10.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:10.596 "oacs": { 00:09:10.596 "security": 0, 00:09:10.596 "format": 0, 00:09:10.596 "firmware": 0, 00:09:10.596 "ns_manage": 0 00:09:10.596 }, 00:09:10.596 "multi_ctrlr": true, 00:09:10.596 "ana_reporting": false 00:09:10.596 }, 00:09:10.596 "vs": { 00:09:10.596 "nvme_version": "1.3" 00:09:10.596 }, 00:09:10.596 "ns_data": { 00:09:10.596 "id": 1, 00:09:10.596 "can_share": true 00:09:10.596 } 00:09:10.596 } 00:09:10.596 ], 00:09:10.596 "mp_policy": "active_passive" 00:09:10.596 } 00:09:10.596 } 00:09:10.596 ] 00:09:10.596 07:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=625869 
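The `"num_blocks": 38912` in the Nvme0n1 bdev dump above is worth a sanity check: the lvol was created as 150 MiB, but lvol sizes round up to a whole number of 4 MiB clusters, so the exposed namespace is 38 clusters, not 37.5. A short sketch using only values from this log:

```shell
lvol_mb=150; cluster_mb=4; block_size=4096
clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))   # ceil(150/4) = 38
num_blocks=$(( clusters * cluster_mb * 1024 * 1024 / block_size ))
echo "$clusters clusters, $num_blocks blocks"             # 38 clusters, 38912 blocks
```

The 38 here also matches the `"num_allocated_clusters": 38` reported for the lvol later in the test.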
00:09:10.596 07:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:10.596 07:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:10.596 Running I/O for 10 seconds... 00:09:11.531 Latency(us) 00:09:11.531 [2024-11-18T06:43:04.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.531 Nvme0n1 : 1.00 14877.00 58.11 0.00 0.00 0.00 0.00 0.00 00:09:11.531 [2024-11-18T06:43:04.619Z] =================================================================================================================== 00:09:11.531 [2024-11-18T06:43:04.619Z] Total : 14877.00 58.11 0.00 0.00 0.00 0.00 0.00 00:09:11.531 00:09:12.466 07:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:12.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.725 Nvme0n1 : 2.00 15122.00 59.07 0.00 0.00 0.00 0.00 0.00 00:09:12.725 [2024-11-18T06:43:05.813Z] =================================================================================================================== 00:09:12.725 [2024-11-18T06:43:05.813Z] Total : 15122.00 59.07 0.00 0.00 0.00 0.00 0.00 00:09:12.725 00:09:12.725 true 00:09:12.725 07:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:12.725 07:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:12.983 07:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:12.983 07:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:12.983 07:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 625869 00:09:13.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.551 Nvme0n1 : 3.00 15161.33 59.22 0.00 0.00 0.00 0.00 0.00 00:09:13.551 [2024-11-18T06:43:06.639Z] =================================================================================================================== 00:09:13.551 [2024-11-18T06:43:06.639Z] Total : 15161.33 59.22 0.00 0.00 0.00 0.00 0.00 00:09:13.551 00:09:14.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.486 Nvme0n1 : 4.00 15244.50 59.55 0.00 0.00 0.00 0.00 0.00 00:09:14.486 [2024-11-18T06:43:07.574Z] =================================================================================================================== 00:09:14.486 [2024-11-18T06:43:07.574Z] Total : 15244.50 59.55 0.00 0.00 0.00 0.00 0.00 00:09:14.486 00:09:15.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.862 Nvme0n1 : 5.00 15294.40 59.74 0.00 0.00 0.00 0.00 0.00 00:09:15.862 [2024-11-18T06:43:08.950Z] =================================================================================================================== 00:09:15.862 [2024-11-18T06:43:08.950Z] Total : 15294.40 59.74 0.00 0.00 0.00 0.00 0.00 00:09:15.862 00:09:16.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.797 Nvme0n1 : 6.00 15348.83 59.96 0.00 0.00 0.00 0.00 0.00 00:09:16.797 [2024-11-18T06:43:09.885Z] =================================================================================================================== 00:09:16.797 
[2024-11-18T06:43:09.885Z] Total : 15348.83 59.96 0.00 0.00 0.00 0.00 0.00 00:09:16.797 00:09:17.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.733 Nvme0n1 : 7.00 15360.71 60.00 0.00 0.00 0.00 0.00 0.00 00:09:17.733 [2024-11-18T06:43:10.821Z] =================================================================================================================== 00:09:17.733 [2024-11-18T06:43:10.821Z] Total : 15360.71 60.00 0.00 0.00 0.00 0.00 0.00 00:09:17.733 00:09:18.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.698 Nvme0n1 : 8.00 15393.25 60.13 0.00 0.00 0.00 0.00 0.00 00:09:18.698 [2024-11-18T06:43:11.786Z] =================================================================================================================== 00:09:18.698 [2024-11-18T06:43:11.786Z] Total : 15393.25 60.13 0.00 0.00 0.00 0.00 0.00 00:09:18.698 00:09:19.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.681 Nvme0n1 : 9.00 15432.89 60.28 0.00 0.00 0.00 0.00 0.00 00:09:19.681 [2024-11-18T06:43:12.769Z] =================================================================================================================== 00:09:19.681 [2024-11-18T06:43:12.769Z] Total : 15432.89 60.28 0.00 0.00 0.00 0.00 0.00 00:09:19.681 00:09:20.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.615 Nvme0n1 : 10.00 15455.20 60.37 0.00 0.00 0.00 0.00 0.00 00:09:20.615 [2024-11-18T06:43:13.703Z] =================================================================================================================== 00:09:20.615 [2024-11-18T06:43:13.703Z] Total : 15455.20 60.37 0.00 0.00 0.00 0.00 0.00 00:09:20.615 00:09:20.615 00:09:20.615 Latency(us) 00:09:20.615 [2024-11-18T06:43:13.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:20.615 Nvme0n1 : 10.00 15460.59 60.39 0.00 0.00 8274.63 4271.98 17864.63 00:09:20.615 [2024-11-18T06:43:13.704Z] =================================================================================================================== 00:09:20.616 [2024-11-18T06:43:13.704Z] Total : 15460.59 60.39 0.00 0.00 8274.63 4271.98 17864.63 00:09:20.616 { 00:09:20.616 "results": [ 00:09:20.616 { 00:09:20.616 "job": "Nvme0n1", 00:09:20.616 "core_mask": "0x2", 00:09:20.616 "workload": "randwrite", 00:09:20.616 "status": "finished", 00:09:20.616 "queue_depth": 128, 00:09:20.616 "io_size": 4096, 00:09:20.616 "runtime": 10.004795, 00:09:20.616 "iops": 15460.586648701947, 00:09:20.616 "mibps": 60.39291659649198, 00:09:20.616 "io_failed": 0, 00:09:20.616 "io_timeout": 0, 00:09:20.616 "avg_latency_us": 8274.633678246128, 00:09:20.616 "min_latency_us": 4271.976296296296, 00:09:20.616 "max_latency_us": 17864.62814814815 00:09:20.616 } 00:09:20.616 ], 00:09:20.616 "core_count": 1 00:09:20.616 } 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 625735 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 625735 ']' 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 625735 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 625735 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:20.616 07:43:13 
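The bdevperf summary above reports both IOPS and MiB/s; the two are consistent with the 4096-byte IO size used in this run. A quick check with the figures from the results JSON:

```shell
# iops and io_size copied from the bdevperf results JSON in this log.
iops=15460.586648701947
io_size=4096
mibps=$(awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f", iops * sz / 1048576 }')
echo "$mibps MiB/s"    # matches the reported "mibps": 60.39...
```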
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 625735' 00:09:20.616 killing process with pid 625735 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 625735 00:09:20.616 Received shutdown signal, test time was about 10.000000 seconds 00:09:20.616 00:09:20.616 Latency(us) 00:09:20.616 [2024-11-18T06:43:13.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.616 [2024-11-18T06:43:13.704Z] =================================================================================================================== 00:09:20.616 [2024-11-18T06:43:13.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:20.616 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 625735 00:09:20.874 07:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.132 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:21.390 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:21.390 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:21.649 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:21.649 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:21.649 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:21.907 [2024-11-18 07:43:14.902339] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.907 07:43:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:21.907 07:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:22.166 request: 00:09:22.166 { 00:09:22.166 "uuid": "eca8fb63-fb4f-4b49-8531-0cf803dfd05c", 00:09:22.166 "method": "bdev_lvol_get_lvstores", 00:09:22.166 "req_id": 1 00:09:22.166 } 00:09:22.166 Got JSON-RPC error response 00:09:22.166 response: 00:09:22.166 { 00:09:22.166 "code": -19, 00:09:22.166 "message": "No such device" 00:09:22.166 } 00:09:22.166 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:22.166 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.166 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:22.166 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.166 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:22.425 aio_bdev 00:09:22.425 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5962eb0d-4983-4b3a-9f1e-d34b9223d319 00:09:22.425 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=5962eb0d-4983-4b3a-9f1e-d34b9223d319 00:09:22.425 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.425 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:22.425 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.425 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.425 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:22.683 07:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5962eb0d-4983-4b3a-9f1e-d34b9223d319 -t 2000 00:09:22.941 [ 00:09:22.941 { 00:09:22.941 "name": "5962eb0d-4983-4b3a-9f1e-d34b9223d319", 00:09:22.941 "aliases": [ 00:09:22.941 "lvs/lvol" 00:09:22.941 ], 00:09:22.941 "product_name": "Logical Volume", 00:09:22.941 "block_size": 4096, 00:09:22.941 "num_blocks": 38912, 00:09:22.941 "uuid": "5962eb0d-4983-4b3a-9f1e-d34b9223d319", 00:09:22.941 "assigned_rate_limits": { 00:09:22.941 "rw_ios_per_sec": 0, 00:09:22.941 "rw_mbytes_per_sec": 0, 00:09:22.941 "r_mbytes_per_sec": 0, 00:09:22.941 "w_mbytes_per_sec": 0 00:09:22.941 }, 00:09:22.941 "claimed": false, 00:09:22.941 "zoned": false, 00:09:22.941 "supported_io_types": { 00:09:22.941 "read": true, 00:09:22.941 "write": true, 00:09:22.941 "unmap": true, 00:09:22.941 "flush": false, 00:09:22.941 "reset": true, 00:09:22.941 
"nvme_admin": false, 00:09:22.941 "nvme_io": false, 00:09:22.941 "nvme_io_md": false, 00:09:22.941 "write_zeroes": true, 00:09:22.941 "zcopy": false, 00:09:22.941 "get_zone_info": false, 00:09:22.941 "zone_management": false, 00:09:22.941 "zone_append": false, 00:09:22.941 "compare": false, 00:09:22.941 "compare_and_write": false, 00:09:22.941 "abort": false, 00:09:22.941 "seek_hole": true, 00:09:22.941 "seek_data": true, 00:09:22.941 "copy": false, 00:09:22.941 "nvme_iov_md": false 00:09:22.941 }, 00:09:22.941 "driver_specific": { 00:09:22.941 "lvol": { 00:09:22.941 "lvol_store_uuid": "eca8fb63-fb4f-4b49-8531-0cf803dfd05c", 00:09:22.941 "base_bdev": "aio_bdev", 00:09:22.941 "thin_provision": false, 00:09:22.941 "num_allocated_clusters": 38, 00:09:22.941 "snapshot": false, 00:09:22.941 "clone": false, 00:09:22.941 "esnap_clone": false 00:09:22.941 } 00:09:22.941 } 00:09:22.941 } 00:09:22.941 ] 00:09:22.941 07:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:22.941 07:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:22.941 07:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:23.507 07:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:23.507 07:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:23.507 07:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:23.507 07:43:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:23.507 07:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5962eb0d-4983-4b3a-9f1e-d34b9223d319 00:09:23.766 07:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eca8fb63-fb4f-4b49-8531-0cf803dfd05c 00:09:24.332 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:24.332 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.332 00:09:24.332 real 0m17.662s 00:09:24.332 user 0m17.240s 00:09:24.332 sys 0m1.802s 00:09:24.332 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.332 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:24.332 ************************************ 00:09:24.332 END TEST lvs_grow_clean 00:09:24.332 ************************************ 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.590 ************************************ 
00:09:24.590 START TEST lvs_grow_dirty 00:09:24.590 ************************************ 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.590 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.849 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:24.849 07:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:25.108 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:25.108 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:25.108 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:25.366 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:25.366 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:25.366 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 776b0b06-e858-4bc7-8845-c4cf011a6329 lvol 150 00:09:25.625 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e547c824-78e1-4255-9b22-2b9f3b6126fc 00:09:25.625 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.625 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:25.883 [2024-11-18 07:43:18.830026] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:25.883 [2024-11-18 07:43:18.830116] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:25.883 true 00:09:25.883 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:25.883 07:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:26.141 07:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:26.141 07:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:26.398 07:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e547c824-78e1-4255-9b22-2b9f3b6126fc 00:09:26.656 07:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:26.914 [2024-11-18 07:43:19.917284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.914 07:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=627809 00:09:27.171 07:43:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 627809 /var/tmp/bdevperf.sock 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 627809 ']' 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:27.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.171 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.171 [2024-11-18 07:43:20.256575] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:09:27.171 [2024-11-18 07:43:20.256683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627809 ] 00:09:27.429 [2024-11-18 07:43:20.325612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.429 [2024-11-18 07:43:20.373612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.429 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.429 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:27.429 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:27.994 Nvme0n1 00:09:27.994 07:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:28.252 [ 00:09:28.252 { 00:09:28.252 "name": "Nvme0n1", 00:09:28.252 "aliases": [ 00:09:28.252 "e547c824-78e1-4255-9b22-2b9f3b6126fc" 00:09:28.252 ], 00:09:28.252 "product_name": "NVMe disk", 00:09:28.252 "block_size": 4096, 00:09:28.252 "num_blocks": 38912, 00:09:28.252 "uuid": "e547c824-78e1-4255-9b22-2b9f3b6126fc", 00:09:28.252 "numa_id": 0, 00:09:28.252 "assigned_rate_limits": { 00:09:28.252 "rw_ios_per_sec": 0, 00:09:28.252 "rw_mbytes_per_sec": 0, 00:09:28.252 "r_mbytes_per_sec": 0, 00:09:28.252 "w_mbytes_per_sec": 0 00:09:28.252 }, 00:09:28.252 "claimed": false, 00:09:28.252 "zoned": false, 00:09:28.252 "supported_io_types": { 00:09:28.252 "read": true, 
00:09:28.252 "write": true, 00:09:28.252 "unmap": true, 00:09:28.252 "flush": true, 00:09:28.252 "reset": true, 00:09:28.252 "nvme_admin": true, 00:09:28.252 "nvme_io": true, 00:09:28.252 "nvme_io_md": false, 00:09:28.252 "write_zeroes": true, 00:09:28.252 "zcopy": false, 00:09:28.252 "get_zone_info": false, 00:09:28.252 "zone_management": false, 00:09:28.252 "zone_append": false, 00:09:28.252 "compare": true, 00:09:28.252 "compare_and_write": true, 00:09:28.252 "abort": true, 00:09:28.252 "seek_hole": false, 00:09:28.252 "seek_data": false, 00:09:28.252 "copy": true, 00:09:28.252 "nvme_iov_md": false 00:09:28.252 }, 00:09:28.252 "memory_domains": [ 00:09:28.252 { 00:09:28.252 "dma_device_id": "system", 00:09:28.252 "dma_device_type": 1 00:09:28.252 } 00:09:28.252 ], 00:09:28.252 "driver_specific": { 00:09:28.252 "nvme": [ 00:09:28.252 { 00:09:28.252 "trid": { 00:09:28.252 "trtype": "TCP", 00:09:28.252 "adrfam": "IPv4", 00:09:28.252 "traddr": "10.0.0.2", 00:09:28.252 "trsvcid": "4420", 00:09:28.252 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:28.252 }, 00:09:28.252 "ctrlr_data": { 00:09:28.252 "cntlid": 1, 00:09:28.252 "vendor_id": "0x8086", 00:09:28.252 "model_number": "SPDK bdev Controller", 00:09:28.252 "serial_number": "SPDK0", 00:09:28.252 "firmware_revision": "25.01", 00:09:28.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:28.252 "oacs": { 00:09:28.252 "security": 0, 00:09:28.252 "format": 0, 00:09:28.252 "firmware": 0, 00:09:28.253 "ns_manage": 0 00:09:28.253 }, 00:09:28.253 "multi_ctrlr": true, 00:09:28.253 "ana_reporting": false 00:09:28.253 }, 00:09:28.253 "vs": { 00:09:28.253 "nvme_version": "1.3" 00:09:28.253 }, 00:09:28.253 "ns_data": { 00:09:28.253 "id": 1, 00:09:28.253 "can_share": true 00:09:28.253 } 00:09:28.253 } 00:09:28.253 ], 00:09:28.253 "mp_policy": "active_passive" 00:09:28.253 } 00:09:28.253 } 00:09:28.253 ] 00:09:28.253 07:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=627947 
00:09:28.253 07:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:28.253 07:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:28.253 Running I/O for 10 seconds... 00:09:29.626 Latency(us) 00:09:29.626 [2024-11-18T06:43:22.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.626 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:09:29.626 [2024-11-18T06:43:22.714Z] =================================================================================================================== 00:09:29.626 [2024-11-18T06:43:22.715Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:09:29.627 00:09:30.192 07:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:30.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.450 Nvme0n1 : 2.00 15115.00 59.04 0.00 0.00 0.00 0.00 0.00 00:09:30.450 [2024-11-18T06:43:23.538Z] =================================================================================================================== 00:09:30.450 [2024-11-18T06:43:23.538Z] Total : 15115.00 59.04 0.00 0.00 0.00 0.00 0.00 00:09:30.450 00:09:30.450 true 00:09:30.450 07:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:30.450 07:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:30.709 07:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:30.709 07:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:30.709 07:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 627947 00:09:31.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.275 Nvme0n1 : 3.00 15168.00 59.25 0.00 0.00 0.00 0.00 0.00 00:09:31.275 [2024-11-18T06:43:24.363Z] =================================================================================================================== 00:09:31.275 [2024-11-18T06:43:24.363Z] Total : 15168.00 59.25 0.00 0.00 0.00 0.00 0.00 00:09:31.275 00:09:32.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.649 Nvme0n1 : 4.00 15281.25 59.69 0.00 0.00 0.00 0.00 0.00 00:09:32.649 [2024-11-18T06:43:25.737Z] =================================================================================================================== 00:09:32.649 [2024-11-18T06:43:25.737Z] Total : 15281.25 59.69 0.00 0.00 0.00 0.00 0.00 00:09:32.649 00:09:33.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.582 Nvme0n1 : 5.00 15349.20 59.96 0.00 0.00 0.00 0.00 0.00 00:09:33.582 [2024-11-18T06:43:26.670Z] =================================================================================================================== 00:09:33.582 [2024-11-18T06:43:26.670Z] Total : 15349.20 59.96 0.00 0.00 0.00 0.00 0.00 00:09:33.582 00:09:34.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.515 Nvme0n1 : 6.00 15394.50 60.13 0.00 0.00 0.00 0.00 0.00 00:09:34.515 [2024-11-18T06:43:27.603Z] =================================================================================================================== 00:09:34.515 
[2024-11-18T06:43:27.603Z] Total : 15394.50 60.13 0.00 0.00 0.00 0.00 0.00 00:09:34.515 00:09:35.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.448 Nvme0n1 : 7.00 15445.00 60.33 0.00 0.00 0.00 0.00 0.00 00:09:35.449 [2024-11-18T06:43:28.537Z] =================================================================================================================== 00:09:35.449 [2024-11-18T06:43:28.537Z] Total : 15445.00 60.33 0.00 0.00 0.00 0.00 0.00 00:09:35.449 00:09:36.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.381 Nvme0n1 : 8.00 15482.88 60.48 0.00 0.00 0.00 0.00 0.00 00:09:36.381 [2024-11-18T06:43:29.469Z] =================================================================================================================== 00:09:36.381 [2024-11-18T06:43:29.469Z] Total : 15482.88 60.48 0.00 0.00 0.00 0.00 0.00 00:09:36.381 00:09:37.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.315 Nvme0n1 : 9.00 15491.33 60.51 0.00 0.00 0.00 0.00 0.00 00:09:37.315 [2024-11-18T06:43:30.403Z] =================================================================================================================== 00:09:37.315 [2024-11-18T06:43:30.403Z] Total : 15491.33 60.51 0.00 0.00 0.00 0.00 0.00 00:09:37.315 00:09:38.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.250 Nvme0n1 : 10.00 15504.30 60.56 0.00 0.00 0.00 0.00 0.00 00:09:38.250 [2024-11-18T06:43:31.338Z] =================================================================================================================== 00:09:38.250 [2024-11-18T06:43:31.338Z] Total : 15504.30 60.56 0.00 0.00 0.00 0.00 0.00 00:09:38.250 00:09:38.250 00:09:38.250 Latency(us) 00:09:38.250 [2024-11-18T06:43:31.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:38.250 Nvme0n1 : 10.01 15506.39 60.57 0.00 0.00 8250.18 4369.07 15825.73 00:09:38.250 [2024-11-18T06:43:31.338Z] =================================================================================================================== 00:09:38.250 [2024-11-18T06:43:31.338Z] Total : 15506.39 60.57 0.00 0.00 8250.18 4369.07 15825.73 00:09:38.250 { 00:09:38.250 "results": [ 00:09:38.250 { 00:09:38.250 "job": "Nvme0n1", 00:09:38.250 "core_mask": "0x2", 00:09:38.250 "workload": "randwrite", 00:09:38.250 "status": "finished", 00:09:38.250 "queue_depth": 128, 00:09:38.250 "io_size": 4096, 00:09:38.250 "runtime": 10.006905, 00:09:38.250 "iops": 15506.392835746918, 00:09:38.250 "mibps": 60.5718470146364, 00:09:38.250 "io_failed": 0, 00:09:38.250 "io_timeout": 0, 00:09:38.250 "avg_latency_us": 8250.177355075655, 00:09:38.250 "min_latency_us": 4369.066666666667, 00:09:38.250 "max_latency_us": 15825.730370370371 00:09:38.250 } 00:09:38.250 ], 00:09:38.250 "core_count": 1 00:09:38.250 } 00:09:38.250 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 627809 00:09:38.250 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 627809 ']' 00:09:38.250 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 627809 00:09:38.508 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:38.508 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.508 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 627809 00:09:38.508 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:38.508 07:43:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:38.508 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 627809' 00:09:38.508 killing process with pid 627809 00:09:38.508 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 627809 00:09:38.508 Received shutdown signal, test time was about 10.000000 seconds 00:09:38.508 00:09:38.508 Latency(us) 00:09:38.508 [2024-11-18T06:43:31.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.508 [2024-11-18T06:43:31.596Z] =================================================================================================================== 00:09:38.508 [2024-11-18T06:43:31.596Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:38.508 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 627809 00:09:38.508 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:38.766 07:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:39.334 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:39.334 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:39.334 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:39.334 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:39.334 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 625297 00:09:39.334 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 625297 00:09:39.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 625297 Killed "${NVMF_APP[@]}" "$@" 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=629287 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 629287 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 629287 ']' 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.598 07:43:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.598 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.598 [2024-11-18 07:43:32.485776] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:39.598 [2024-11-18 07:43:32.485896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.598 [2024-11-18 07:43:32.560144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.598 [2024-11-18 07:43:32.606966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.598 [2024-11-18 07:43:32.607043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.598 [2024-11-18 07:43:32.607063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.598 [2024-11-18 07:43:32.607074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.598 [2024-11-18 07:43:32.607083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:39.598 [2024-11-18 07:43:32.607666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.857 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.857 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:39.857 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.857 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.857 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.857 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.857 07:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.115 [2024-11-18 07:43:33.012347] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:40.115 [2024-11-18 07:43:33.012484] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:40.115 [2024-11-18 07:43:33.012561] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:40.115 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:40.115 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e547c824-78e1-4255-9b22-2b9f3b6126fc 00:09:40.115 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e547c824-78e1-4255-9b22-2b9f3b6126fc 
00:09:40.115 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.115 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:40.115 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.115 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.115 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:40.373 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e547c824-78e1-4255-9b22-2b9f3b6126fc -t 2000 00:09:40.631 [ 00:09:40.631 { 00:09:40.631 "name": "e547c824-78e1-4255-9b22-2b9f3b6126fc", 00:09:40.631 "aliases": [ 00:09:40.631 "lvs/lvol" 00:09:40.631 ], 00:09:40.631 "product_name": "Logical Volume", 00:09:40.631 "block_size": 4096, 00:09:40.631 "num_blocks": 38912, 00:09:40.631 "uuid": "e547c824-78e1-4255-9b22-2b9f3b6126fc", 00:09:40.631 "assigned_rate_limits": { 00:09:40.631 "rw_ios_per_sec": 0, 00:09:40.631 "rw_mbytes_per_sec": 0, 00:09:40.631 "r_mbytes_per_sec": 0, 00:09:40.631 "w_mbytes_per_sec": 0 00:09:40.631 }, 00:09:40.631 "claimed": false, 00:09:40.631 "zoned": false, 00:09:40.631 "supported_io_types": { 00:09:40.631 "read": true, 00:09:40.631 "write": true, 00:09:40.631 "unmap": true, 00:09:40.631 "flush": false, 00:09:40.631 "reset": true, 00:09:40.631 "nvme_admin": false, 00:09:40.631 "nvme_io": false, 00:09:40.631 "nvme_io_md": false, 00:09:40.631 "write_zeroes": true, 00:09:40.631 "zcopy": false, 00:09:40.631 "get_zone_info": false, 00:09:40.631 "zone_management": false, 00:09:40.631 "zone_append": 
false, 00:09:40.631 "compare": false, 00:09:40.631 "compare_and_write": false, 00:09:40.631 "abort": false, 00:09:40.631 "seek_hole": true, 00:09:40.631 "seek_data": true, 00:09:40.631 "copy": false, 00:09:40.631 "nvme_iov_md": false 00:09:40.631 }, 00:09:40.631 "driver_specific": { 00:09:40.631 "lvol": { 00:09:40.631 "lvol_store_uuid": "776b0b06-e858-4bc7-8845-c4cf011a6329", 00:09:40.631 "base_bdev": "aio_bdev", 00:09:40.631 "thin_provision": false, 00:09:40.631 "num_allocated_clusters": 38, 00:09:40.631 "snapshot": false, 00:09:40.631 "clone": false, 00:09:40.631 "esnap_clone": false 00:09:40.631 } 00:09:40.631 } 00:09:40.631 } 00:09:40.631 ] 00:09:40.631 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:40.631 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:40.631 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:40.889 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:40.889 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:40.889 07:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:41.146 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:41.146 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:41.405 [2024-11-18 07:43:34.374041] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.405 07:43:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:41.405 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:41.663 request: 00:09:41.663 { 00:09:41.663 "uuid": "776b0b06-e858-4bc7-8845-c4cf011a6329", 00:09:41.663 "method": "bdev_lvol_get_lvstores", 00:09:41.663 "req_id": 1 00:09:41.663 } 00:09:41.663 Got JSON-RPC error response 00:09:41.663 response: 00:09:41.663 { 00:09:41.663 "code": -19, 00:09:41.663 "message": "No such device" 00:09:41.663 } 00:09:41.663 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:41.663 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.663 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.663 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.663 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.921 aio_bdev 00:09:41.921 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e547c824-78e1-4255-9b22-2b9f3b6126fc 00:09:41.921 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e547c824-78e1-4255-9b22-2b9f3b6126fc 00:09:41.921 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.921 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:41.921 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.921 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.921 07:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:42.178 07:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e547c824-78e1-4255-9b22-2b9f3b6126fc -t 2000 00:09:42.437 [ 00:09:42.437 { 00:09:42.437 "name": "e547c824-78e1-4255-9b22-2b9f3b6126fc", 00:09:42.437 "aliases": [ 00:09:42.437 "lvs/lvol" 00:09:42.437 ], 00:09:42.437 "product_name": "Logical Volume", 00:09:42.437 "block_size": 4096, 00:09:42.437 "num_blocks": 38912, 00:09:42.437 "uuid": "e547c824-78e1-4255-9b22-2b9f3b6126fc", 00:09:42.437 "assigned_rate_limits": { 00:09:42.437 "rw_ios_per_sec": 0, 00:09:42.437 "rw_mbytes_per_sec": 0, 00:09:42.437 "r_mbytes_per_sec": 0, 00:09:42.437 "w_mbytes_per_sec": 0 00:09:42.437 }, 00:09:42.437 "claimed": false, 00:09:42.437 "zoned": false, 00:09:42.437 "supported_io_types": { 00:09:42.437 "read": true, 00:09:42.437 "write": true, 00:09:42.437 "unmap": true, 00:09:42.437 "flush": false, 00:09:42.437 "reset": true, 00:09:42.437 "nvme_admin": false, 00:09:42.437 "nvme_io": false, 00:09:42.437 "nvme_io_md": false, 00:09:42.437 "write_zeroes": true, 00:09:42.437 "zcopy": false, 00:09:42.437 "get_zone_info": false, 00:09:42.437 "zone_management": false, 00:09:42.437 "zone_append": false, 00:09:42.437 "compare": false, 00:09:42.437 "compare_and_write": false, 
00:09:42.437 "abort": false, 00:09:42.437 "seek_hole": true, 00:09:42.437 "seek_data": true, 00:09:42.437 "copy": false, 00:09:42.437 "nvme_iov_md": false 00:09:42.437 }, 00:09:42.437 "driver_specific": { 00:09:42.437 "lvol": { 00:09:42.437 "lvol_store_uuid": "776b0b06-e858-4bc7-8845-c4cf011a6329", 00:09:42.437 "base_bdev": "aio_bdev", 00:09:42.437 "thin_provision": false, 00:09:42.437 "num_allocated_clusters": 38, 00:09:42.437 "snapshot": false, 00:09:42.437 "clone": false, 00:09:42.437 "esnap_clone": false 00:09:42.437 } 00:09:42.437 } 00:09:42.437 } 00:09:42.437 ] 00:09:42.437 07:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:42.437 07:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:42.437 07:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:43.004 07:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:43.004 07:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:43.004 07:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:43.004 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:43.004 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e547c824-78e1-4255-9b22-2b9f3b6126fc 00:09:43.262 07:43:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 776b0b06-e858-4bc7-8845-c4cf011a6329 00:09:43.829 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.829 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:44.087 00:09:44.087 real 0m19.470s 00:09:44.087 user 0m49.266s 00:09:44.087 sys 0m4.495s 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.087 ************************************ 00:09:44.087 END TEST lvs_grow_dirty 00:09:44.087 ************************************ 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:44.087 nvmf_trace.0 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.087 07:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.087 rmmod nvme_tcp 00:09:44.087 rmmod nvme_fabrics 00:09:44.087 rmmod nvme_keyring 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 629287 ']' 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 629287 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 629287 ']' 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 629287 
00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629287 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629287' 00:09:44.087 killing process with pid 629287 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 629287 00:09:44.087 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 629287 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.345 07:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.290 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.290 00:09:46.290 real 0m42.740s 00:09:46.290 user 1m12.577s 00:09:46.290 sys 0m8.308s 00:09:46.290 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.290 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:46.290 ************************************ 00:09:46.290 END TEST nvmf_lvs_grow 00:09:46.290 ************************************ 00:09:46.290 07:43:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:46.291 07:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.291 07:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.291 07:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.551 ************************************ 00:09:46.551 START TEST nvmf_bdev_io_wait 00:09:46.551 ************************************ 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:46.551 * Looking for test storage... 
00:09:46.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.551 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.551 --rc genhtml_branch_coverage=1 00:09:46.551 --rc genhtml_function_coverage=1 00:09:46.551 --rc genhtml_legend=1 00:09:46.551 --rc geninfo_all_blocks=1 00:09:46.551 --rc geninfo_unexecuted_blocks=1 00:09:46.551 00:09:46.551 ' 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.551 --rc genhtml_branch_coverage=1 00:09:46.551 --rc genhtml_function_coverage=1 00:09:46.551 --rc genhtml_legend=1 00:09:46.551 --rc geninfo_all_blocks=1 00:09:46.551 --rc geninfo_unexecuted_blocks=1 00:09:46.551 00:09:46.551 ' 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.551 --rc genhtml_branch_coverage=1 00:09:46.551 --rc genhtml_function_coverage=1 00:09:46.551 --rc genhtml_legend=1 00:09:46.551 --rc geninfo_all_blocks=1 00:09:46.551 --rc geninfo_unexecuted_blocks=1 00:09:46.551 00:09:46.551 ' 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.551 --rc genhtml_branch_coverage=1 00:09:46.551 --rc genhtml_function_coverage=1 00:09:46.551 --rc genhtml_legend=1 00:09:46.551 --rc geninfo_all_blocks=1 00:09:46.551 --rc geninfo_unexecuted_blocks=1 00:09:46.551 00:09:46.551 ' 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.551 07:43:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.551 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.552 07:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.086 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.087 07:43:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:49.087 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:49.087 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.087 07:43:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:49.087 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.087 
07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:49.087 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.087 07:43:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:09:49.087 00:09:49.087 --- 10.0.0.2 ping statistics --- 00:09:49.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.087 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:09:49.087 00:09:49.087 --- 10.0.0.1 ping statistics --- 00:09:49.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.087 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=631853 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
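(Editor's note: the `nvmf_tcp_init` steps traced above — namespace creation, moving the target NIC into it, addressing, the iptables accept rule, and the two-way ping check — reduce to the sequence below. This is a dry-run sketch: `DRYRUN=echo` only prints the commands; clearing it would require root and the `cvl_0_*` E810 netdevs specific to this host, so the interface names and addresses are taken verbatim from the trace.)

```shell
DRYRUN=echo                # clear to actually execute (root required)
NS=cvl_0_0_ns_spdk         # target namespace, as in the trace
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1

$DRYRUN ip netns add "$NS"
$DRYRUN ip link set "$TARGET_IF" netns "$NS"
$DRYRUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$DRYRUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$DRYRUN ip link set "$INITIATOR_IF" up
$DRYRUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$DRYRUN ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface:
$DRYRUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, matching the pings in the log:
$DRYRUN ping -c 1 10.0.0.2
$DRYRUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

(Putting the target in its own namespace gives the test real kernel networking between "two hosts" on one machine, which is why both pings succeed with distinct RTTs.)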
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 631853 00:09:49.087 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 631853 ']' 00:09:49.088 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.088 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.088 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.088 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.088 07:43:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.088 [2024-11-18 07:43:41.868896] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:49.088 [2024-11-18 07:43:41.868990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.088 [2024-11-18 07:43:41.941770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.088 [2024-11-18 07:43:41.986943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.088 [2024-11-18 07:43:41.987000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:49.088 [2024-11-18 07:43:41.987024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.088 [2024-11-18 07:43:41.987034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.088 [2024-11-18 07:43:41.987043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.088 [2024-11-18 07:43:41.988577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.088 [2024-11-18 07:43:41.988644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.088 [2024-11-18 07:43:41.988710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.088 [2024-11-18 07:43:41.988712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.088 07:43:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.088 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.346 [2024-11-18 07:43:42.202434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.346 Malloc0 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.346 
07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.346 [2024-11-18 07:43:42.255591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=631971 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=631972 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
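(Editor's note: the `rpc_cmd` calls traced above stand up the target end-to-end. Written as plain `scripts/rpc.py` invocations they are the sequence below; the arguments are taken from the trace, while the `rpc.py` path and default `/var/tmp/spdk.sock` socket are the standard SPDK conventions, not shown in this log. `RPC` is prefixed with `echo` as a dry run, since the calls need a live `nvmf_tgt`.)

```shell
RPC="echo scripts/rpc.py"   # drop the echo to run against a live target

$RPC bdev_set_options -p 5 -c 1          # bdev pool sizes (per bdev_io_wait.sh@18)
$RPC framework_start_init                # finish init deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

(The two `*** TCP Transport Init ***` and `*** NVMe/TCP Target Listening ***` notices in the log correspond to the transport-create and add-listener steps.)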
00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=631975 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:49.346 { 00:09:49.346 "params": { 00:09:49.346 "name": "Nvme$subsystem", 00:09:49.346 "trtype": "$TEST_TRANSPORT", 00:09:49.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:49.346 "adrfam": "ipv4", 00:09:49.346 "trsvcid": "$NVMF_PORT", 00:09:49.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:49.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:49.346 "hdgst": ${hdgst:-false}, 00:09:49.346 "ddgst": ${ddgst:-false} 00:09:49.346 }, 00:09:49.346 "method": "bdev_nvme_attach_controller" 00:09:49.346 } 00:09:49.346 EOF 00:09:49.346 )") 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=631977 00:09:49.346 07:43:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:49.346 { 00:09:49.346 "params": { 00:09:49.346 "name": "Nvme$subsystem", 00:09:49.346 "trtype": "$TEST_TRANSPORT", 00:09:49.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:49.346 "adrfam": "ipv4", 00:09:49.346 "trsvcid": "$NVMF_PORT", 00:09:49.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:49.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:49.346 "hdgst": ${hdgst:-false}, 00:09:49.346 "ddgst": ${ddgst:-false} 00:09:49.346 }, 00:09:49.346 "method": "bdev_nvme_attach_controller" 00:09:49.346 } 00:09:49.346 EOF 00:09:49.346 )") 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:49.346 { 00:09:49.346 "params": { 00:09:49.346 "name": 
"Nvme$subsystem", 00:09:49.346 "trtype": "$TEST_TRANSPORT", 00:09:49.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:49.346 "adrfam": "ipv4", 00:09:49.346 "trsvcid": "$NVMF_PORT", 00:09:49.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:49.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:49.346 "hdgst": ${hdgst:-false}, 00:09:49.346 "ddgst": ${ddgst:-false} 00:09:49.346 }, 00:09:49.346 "method": "bdev_nvme_attach_controller" 00:09:49.346 } 00:09:49.346 EOF 00:09:49.346 )") 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:49.346 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:49.346 { 00:09:49.346 "params": { 00:09:49.346 "name": "Nvme$subsystem", 00:09:49.346 "trtype": "$TEST_TRANSPORT", 00:09:49.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:49.346 "adrfam": "ipv4", 00:09:49.346 "trsvcid": "$NVMF_PORT", 00:09:49.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:49.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:49.346 "hdgst": ${hdgst:-false}, 00:09:49.346 "ddgst": ${ddgst:-false} 00:09:49.347 }, 00:09:49.347 "method": "bdev_nvme_attach_controller" 00:09:49.347 } 00:09:49.347 EOF 00:09:49.347 )") 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 631971 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 
-- # cat 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:49.347 "params": { 00:09:49.347 "name": "Nvme1", 00:09:49.347 "trtype": "tcp", 00:09:49.347 "traddr": "10.0.0.2", 00:09:49.347 "adrfam": "ipv4", 00:09:49.347 "trsvcid": "4420", 00:09:49.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:49.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:49.347 "hdgst": false, 00:09:49.347 "ddgst": false 00:09:49.347 }, 00:09:49.347 "method": "bdev_nvme_attach_controller" 00:09:49.347 }' 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:49.347 "params": { 00:09:49.347 "name": "Nvme1", 00:09:49.347 "trtype": "tcp", 00:09:49.347 "traddr": "10.0.0.2", 00:09:49.347 "adrfam": "ipv4", 00:09:49.347 "trsvcid": "4420", 00:09:49.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:49.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:49.347 "hdgst": false, 00:09:49.347 "ddgst": false 00:09:49.347 }, 00:09:49.347 "method": "bdev_nvme_attach_controller" 00:09:49.347 }' 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:49.347 "params": { 00:09:49.347 "name": "Nvme1", 00:09:49.347 "trtype": "tcp", 00:09:49.347 "traddr": 
"10.0.0.2", 00:09:49.347 "adrfam": "ipv4", 00:09:49.347 "trsvcid": "4420", 00:09:49.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:49.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:49.347 "hdgst": false, 00:09:49.347 "ddgst": false 00:09:49.347 }, 00:09:49.347 "method": "bdev_nvme_attach_controller" 00:09:49.347 }' 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:49.347 07:43:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:49.347 "params": { 00:09:49.347 "name": "Nvme1", 00:09:49.347 "trtype": "tcp", 00:09:49.347 "traddr": "10.0.0.2", 00:09:49.347 "adrfam": "ipv4", 00:09:49.347 "trsvcid": "4420", 00:09:49.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:49.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:49.347 "hdgst": false, 00:09:49.347 "ddgst": false 00:09:49.347 }, 00:09:49.347 "method": "bdev_nvme_attach_controller" 00:09:49.347 }' 00:09:49.347 [2024-11-18 07:43:42.307031] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:49.347 [2024-11-18 07:43:42.307031] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:49.347 [2024-11-18 07:43:42.307031] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:09:49.347 [2024-11-18 07:43:42.307119] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:49.347 [2024-11-18 07:43:42.307119] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:49.347 [2024-11-18 07:43:42.307119] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:49.347 [2024-11-18 07:43:42.307582] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... [2024-11-18 07:43:42.307653] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:49.605 [2024-11-18 07:43:42.492805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.605 [2024-11-18 07:43:42.535702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:49.605 [2024-11-18 07:43:42.594969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.605 [2024-11-18 07:43:42.637163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:49.862 [2024-11-18 07:43:42.695054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.862 [2024-11-18 07:43:42.739008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:49.862 [2024-11-18 07:43:42.765688] app.c: 919:spdk_app_start: *NOTICE*: Total cores
available: 1 00:09:49.862 [2024-11-18 07:43:42.804546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:49.862 Running I/O for 1 seconds... 00:09:49.862 Running I/O for 1 seconds... 00:09:49.862 Running I/O for 1 seconds... 00:09:49.862 Running I/O for 1 seconds... 00:09:51.235 8567.00 IOPS, 33.46 MiB/s [2024-11-18T06:43:44.323Z] 7972.00 IOPS, 31.14 MiB/s [2024-11-18T06:43:44.323Z] 8976.00 IOPS, 35.06 MiB/s 00:09:51.235 Latency(us) 00:09:51.235 [2024-11-18T06:43:44.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.235 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:51.235 Nvme1n1 : 1.01 9040.77 35.32 0.00 0.00 14100.55 5849.69 25243.50 00:09:51.235 [2024-11-18T06:43:44.323Z] =================================================================================================================== 00:09:51.235 [2024-11-18T06:43:44.323Z] Total : 9040.77 35.32 0.00 0.00 14100.55 5849.69 25243.50 00:09:51.235 00:09:51.235 Latency(us) 00:09:51.235 [2024-11-18T06:43:44.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.235 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:51.235 Nvme1n1 : 1.01 8626.67 33.70 0.00 0.00 14766.47 7233.23 27767.85 00:09:51.236 [2024-11-18T06:43:44.324Z] =================================================================================================================== 00:09:51.236 [2024-11-18T06:43:44.324Z] Total : 8626.67 33.70 0.00 0.00 14766.47 7233.23 27767.85 00:09:51.236 00:09:51.236 Latency(us) 00:09:51.236 [2024-11-18T06:43:44.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.236 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:51.236 Nvme1n1 : 1.01 8023.91 31.34 0.00 0.00 15874.23 8495.41 27962.03 00:09:51.236 [2024-11-18T06:43:44.324Z] 
=================================================================================================================== 00:09:51.236 [2024-11-18T06:43:44.324Z] Total : 8023.91 31.34 0.00 0.00 15874.23 8495.41 27962.03 00:09:51.236 178816.00 IOPS, 698.50 MiB/s 00:09:51.236 Latency(us) 00:09:51.236 [2024-11-18T06:43:44.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.236 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:51.236 Nvme1n1 : 1.00 178480.07 697.19 0.00 0.00 713.35 288.24 1868.99 00:09:51.236 [2024-11-18T06:43:44.324Z] =================================================================================================================== 00:09:51.236 [2024-11-18T06:43:44.324Z] Total : 178480.07 697.19 0.00 0.00 713.35 288.24 1868.99 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 631972 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 631975 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 631977 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 
00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.236 rmmod nvme_tcp 00:09:51.236 rmmod nvme_fabrics 00:09:51.236 rmmod nvme_keyring 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 631853 ']' 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 631853 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 631853 ']' 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 631853 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 631853 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 631853' 00:09:51.236 killing process with pid 631853 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 631853 00:09:51.236 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 631853 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.494 07:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.402 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:09:53.662 00:09:53.662 real 0m7.112s 00:09:53.662 user 0m15.063s 00:09:53.662 sys 0m3.682s 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.662 ************************************ 00:09:53.662 END TEST nvmf_bdev_io_wait 00:09:53.662 ************************************ 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.662 ************************************ 00:09:53.662 START TEST nvmf_queue_depth 00:09:53.662 ************************************ 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:53.662 * Looking for test storage... 
00:09:53.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:53.662 
07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.662 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:53.662 --rc genhtml_branch_coverage=1 00:09:53.662 --rc genhtml_function_coverage=1 00:09:53.662 --rc genhtml_legend=1 00:09:53.662 --rc geninfo_all_blocks=1 00:09:53.662 --rc geninfo_unexecuted_blocks=1 00:09:53.662 00:09:53.662 ' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.662 --rc genhtml_branch_coverage=1 00:09:53.662 --rc genhtml_function_coverage=1 00:09:53.662 --rc genhtml_legend=1 00:09:53.662 --rc geninfo_all_blocks=1 00:09:53.662 --rc geninfo_unexecuted_blocks=1 00:09:53.662 00:09:53.662 ' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.662 --rc genhtml_branch_coverage=1 00:09:53.662 --rc genhtml_function_coverage=1 00:09:53.662 --rc genhtml_legend=1 00:09:53.662 --rc geninfo_all_blocks=1 00:09:53.662 --rc geninfo_unexecuted_blocks=1 00:09:53.662 00:09:53.662 ' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.662 --rc genhtml_branch_coverage=1 00:09:53.662 --rc genhtml_function_coverage=1 00:09:53.662 --rc genhtml_legend=1 00:09:53.662 --rc geninfo_all_blocks=1 00:09:53.662 --rc geninfo_unexecuted_blocks=1 00:09:53.662 00:09:53.662 ' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.662 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.663 07:43:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.663 07:43:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.663 07:43:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.663 07:43:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.199 07:43:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:56.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:56.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:56.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:56.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.199 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.200 
07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.200 07:43:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:09:56.200 00:09:56.200 --- 10.0.0.2 ping statistics --- 00:09:56.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.200 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
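The `nvmf_tcp_init` sequence above moves the target interface into a fresh network namespace, assigns the 10.0.0.x addresses, brings the links up, opens TCP port 4420, and ping-checks both directions. A sketch that assembles the same sequence as command strings rather than executing it (the real steps need root and the physical `cvl_*` interfaces; addresses, prefix length, and port are taken from the log):

```python
def netns_setup_cmds(target_if, initiator_if, target_ip, initiator_ip, ns):
    """Assemble the ip/iptables sequence nvmf_tcp_init runs, as strings."""
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        # Target NIC lives inside the namespace; initiator NIC stays in the root ns.
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        # Accept NVMe/TCP traffic on the discovery/IO port.
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport 4420 -j ACCEPT",
    ]

cmds = netns_setup_cmds("cvl_0_0", "cvl_0_1", "10.0.0.2", "10.0.0.1",
                        "cvl_0_0_ns_spdk")
```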
00:09:56.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:09:56.200 00:09:56.200 --- 10.0.0.1 ping statistics --- 00:09:56.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.200 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=634212 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 634212 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 634212 ']' 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.200 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.200 [2024-11-18 07:43:49.219260] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:56.200 [2024-11-18 07:43:49.219349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.458 [2024-11-18 07:43:49.298024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.458 [2024-11-18 07:43:49.344312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.458 [2024-11-18 07:43:49.344376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:56.458 [2024-11-18 07:43:49.344391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.458 [2024-11-18 07:43:49.344402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.458 [2024-11-18 07:43:49.344413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.458 [2024-11-18 07:43:49.344991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.458 [2024-11-18 07:43:49.484237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.458 Malloc0 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.458 [2024-11-18 07:43:49.532587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.458 07:43:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=634236 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 634236 /var/tmp/bdevperf.sock 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 634236 ']' 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:56.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.458 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.716 [2024-11-18 07:43:49.580528] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
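Before bdevperf starts, queue_depth.sh configures the target with five RPCs (visible as the `rpc_cmd` lines above): create the TCP transport, create a malloc bdev, create the subsystem, attach the namespace, and add the listener. As a list of argv vectors, mirroring the log:

```python
def target_setup_rpcs(nqn="nqn.2016-06.io.spdk:cnode1", ip="10.0.0.2", port="4420"):
    """The rpc_cmd calls queue_depth.sh issues against the target, in order."""
    return [
        ["nvmf_create_transport", "-t", "tcp", "-o", "-u", "8192"],
        ["bdev_malloc_create", "64", "512", "-b", "Malloc0"],
        ["nvmf_create_subsystem", nqn, "-a", "-s", "SPDK00000000000001"],
        ["nvmf_subsystem_add_ns", nqn, "Malloc0"],
        ["nvmf_subsystem_add_listener", nqn, "-t", "tcp", "-a", ip, "-s", port],
    ]

rpcs = target_setup_rpcs()
```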
00:09:56.716 [2024-11-18 07:43:49.580595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634236 ] 00:09:56.716 [2024-11-18 07:43:49.647099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.716 [2024-11-18 07:43:49.694648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.975 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.975 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:56.975 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:56.975 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.975 07:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.975 NVMe0n1 00:09:56.975 07:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.975 07:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:57.233 Running I/O for 10 seconds... 
00:09:59.101 8051.00 IOPS, 31.45 MiB/s [2024-11-18T06:43:53.563Z] 8186.00 IOPS, 31.98 MiB/s [2024-11-18T06:43:54.498Z] 8194.33 IOPS, 32.01 MiB/s [2024-11-18T06:43:55.431Z] 8289.75 IOPS, 32.38 MiB/s [2024-11-18T06:43:56.366Z] 8366.20 IOPS, 32.68 MiB/s [2024-11-18T06:43:57.300Z] 8354.67 IOPS, 32.64 MiB/s [2024-11-18T06:43:58.235Z] 8334.43 IOPS, 32.56 MiB/s [2024-11-18T06:43:59.609Z] 8346.62 IOPS, 32.60 MiB/s [2024-11-18T06:44:00.544Z] 8394.44 IOPS, 32.79 MiB/s [2024-11-18T06:44:00.544Z] 8386.50 IOPS, 32.76 MiB/s 00:10:07.456 Latency(us) 00:10:07.456 [2024-11-18T06:44:00.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.456 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:07.456 Verification LBA range: start 0x0 length 0x4000 00:10:07.456 NVMe0n1 : 10.10 8399.85 32.81 0.00 0.00 121416.87 21456.97 71846.87 00:10:07.456 [2024-11-18T06:44:00.544Z] =================================================================================================================== 00:10:07.456 [2024-11-18T06:44:00.544Z] Total : 8399.85 32.81 0.00 0.00 121416.87 21456.97 71846.87 00:10:07.456 { 00:10:07.456 "results": [ 00:10:07.456 { 00:10:07.456 "job": "NVMe0n1", 00:10:07.456 "core_mask": "0x1", 00:10:07.456 "workload": "verify", 00:10:07.456 "status": "finished", 00:10:07.456 "verify_range": { 00:10:07.456 "start": 0, 00:10:07.456 "length": 16384 00:10:07.456 }, 00:10:07.456 "queue_depth": 1024, 00:10:07.456 "io_size": 4096, 00:10:07.456 "runtime": 10.104465, 00:10:07.456 "iops": 8399.85095697793, 00:10:07.456 "mibps": 32.81191780069504, 00:10:07.456 "io_failed": 0, 00:10:07.456 "io_timeout": 0, 00:10:07.456 "avg_latency_us": 121416.8679310122, 00:10:07.456 "min_latency_us": 21456.971851851853, 00:10:07.456 "max_latency_us": 71846.87407407408 00:10:07.456 } 00:10:07.456 ], 00:10:07.456 "core_count": 1 00:10:07.456 } 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 634236 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 634236 ']' 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 634236 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634236 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634236' 00:10:07.456 killing process with pid 634236 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 634236 00:10:07.456 Received shutdown signal, test time was about 10.000000 seconds 00:10:07.456 00:10:07.456 Latency(us) 00:10:07.456 [2024-11-18T06:44:00.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.456 [2024-11-18T06:44:00.544Z] =================================================================================================================== 00:10:07.456 [2024-11-18T06:44:00.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 634236 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
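The bdevperf JSON above reports IOPS and MiB/s as separate fields, but the second is derivable from the first given the fixed 4096-byte I/O size (`-o 4096`). A quick check of the arithmetic against the reported figures:

```python
def iops_to_mibps(iops, io_size_bytes):
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# Figures from the bdevperf summary above.
mibps = iops_to_mibps(8399.85, 4096)
print(round(mibps, 2))  # matches the reported 32.81 MiB/s
```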
00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.456 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.714 rmmod nvme_tcp 00:10:07.714 rmmod nvme_fabrics 00:10:07.714 rmmod nvme_keyring 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 634212 ']' 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 634212 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 634212 ']' 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 634212 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634212 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634212' 00:10:07.714 killing process with pid 634212 00:10:07.714 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 634212 00:10:07.715 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 634212 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.973 07:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.878 07:44:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.878 00:10:09.878 real 0m16.343s 00:10:09.878 user 0m21.715s 00:10:09.878 sys 0m3.726s 00:10:09.878 07:44:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.878 07:44:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.878 ************************************ 00:10:09.878 END TEST nvmf_queue_depth 00:10:09.878 ************************************ 00:10:09.878 07:44:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:09.878 07:44:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.878 07:44:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.878 07:44:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.878 ************************************ 00:10:09.878 START TEST nvmf_target_multipath 00:10:09.878 ************************************ 00:10:09.878 07:44:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:10.138 * Looking for test storage... 
00:10:10.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.138 07:44:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.138 07:44:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.138 07:44:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:10.138 07:44:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.138 --rc genhtml_branch_coverage=1 00:10:10.138 --rc genhtml_function_coverage=1 00:10:10.138 --rc genhtml_legend=1 00:10:10.138 --rc geninfo_all_blocks=1 00:10:10.138 --rc geninfo_unexecuted_blocks=1 00:10:10.138 00:10:10.138 ' 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.138 --rc genhtml_branch_coverage=1 00:10:10.138 --rc genhtml_function_coverage=1 00:10:10.138 --rc genhtml_legend=1 00:10:10.138 --rc geninfo_all_blocks=1 00:10:10.138 --rc geninfo_unexecuted_blocks=1 00:10:10.138 00:10:10.138 ' 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.138 --rc genhtml_branch_coverage=1 00:10:10.138 --rc genhtml_function_coverage=1 00:10:10.138 --rc genhtml_legend=1 00:10:10.138 --rc geninfo_all_blocks=1 00:10:10.138 --rc geninfo_unexecuted_blocks=1 00:10:10.138 00:10:10.138 ' 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.138 --rc genhtml_branch_coverage=1 00:10:10.138 --rc genhtml_function_coverage=1 00:10:10.138 --rc genhtml_legend=1 00:10:10.138 --rc geninfo_all_blocks=1 00:10:10.138 --rc geninfo_unexecuted_blocks=1 00:10:10.138 00:10:10.138 ' 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.138 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.139 07:44:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:12.676 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:12.676 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.676 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:12.677 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.677 07:44:05 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:12.677 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:10:12.677 00:10:12.677 --- 10.0.0.2 ping statistics --- 00:10:12.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.677 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:10:12.677 00:10:12.677 --- 10.0.0.1 ping statistics --- 00:10:12.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.677 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:12.677 only one NIC for nvmf test 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:12.677 07:44:05 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.677 rmmod nvme_tcp 00:10:12.677 rmmod nvme_fabrics 00:10:12.677 rmmod nvme_keyring 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.677 07:44:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.587 00:10:14.587 real 0m4.547s 00:10:14.587 user 0m0.936s 00:10:14.587 sys 0m1.615s 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:14.587 ************************************ 00:10:14.587 END TEST nvmf_target_multipath 00:10:14.587 ************************************ 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:14.587 07:44:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.588 ************************************ 00:10:14.588 START TEST nvmf_zcopy 00:10:14.588 ************************************ 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:14.588 * Looking for test storage... 00:10:14.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:14.588 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.848 07:44:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.848 --rc genhtml_branch_coverage=1 00:10:14.848 --rc genhtml_function_coverage=1 00:10:14.848 --rc genhtml_legend=1 00:10:14.848 --rc geninfo_all_blocks=1 00:10:14.848 --rc geninfo_unexecuted_blocks=1 00:10:14.848 00:10:14.848 ' 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.848 --rc genhtml_branch_coverage=1 00:10:14.848 --rc genhtml_function_coverage=1 00:10:14.848 --rc genhtml_legend=1 00:10:14.848 --rc geninfo_all_blocks=1 00:10:14.848 --rc geninfo_unexecuted_blocks=1 00:10:14.848 00:10:14.848 ' 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.848 --rc genhtml_branch_coverage=1 00:10:14.848 --rc genhtml_function_coverage=1 00:10:14.848 --rc genhtml_legend=1 00:10:14.848 --rc geninfo_all_blocks=1 00:10:14.848 --rc geninfo_unexecuted_blocks=1 00:10:14.848 00:10:14.848 ' 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.848 --rc genhtml_branch_coverage=1 00:10:14.848 --rc 
genhtml_function_coverage=1 00:10:14.848 --rc genhtml_legend=1 00:10:14.848 --rc geninfo_all_blocks=1 00:10:14.848 --rc geninfo_unexecuted_blocks=1 00:10:14.848 00:10:14.848 ' 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.848 07:44:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.848 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.849 07:44:07 
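The `[: : integer expression expected` message captured above is bash's complaint when the `[` builtin's `-eq` operator receives an empty string; the trace shows the offending test was `'[' '' -eq 1 ']'` at nvmf/common.sh line 33. A small reproduction with the usual defensive fix (the variable name here is illustrative, not the one common.sh uses):

```shell
flag=""                              # empty, as in the traced test
# [ "$flag" -eq 1 ] would print "[: : integer expression expected" to
# stderr and return status 2; substituting a numeric default via ${...:-0}
# keeps the operand numeric and avoids the error:
if [ "${flag:-0}" -eq 1 ]; then
    state="set"
else
    state="not set"
fi
echo "flag is $state"                # prints "flag is not set"
```

The test run continues past this line because the harness treats the failed `[` as an ordinary false branch, but the stderr noise in the log is this exact class of bug.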
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.849 07:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.754 07:44:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:16.754 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:16.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:16.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:16.754 07:44:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:16.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.754 07:44:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.754 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.755 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.755 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.755 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.013 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.013 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.013 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.013 07:44:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:10:17.013 00:10:17.013 --- 10.0.0.2 ping statistics --- 00:10:17.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.013 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:10:17.013 00:10:17.013 --- 10.0.0.1 ping statistics --- 00:10:17.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.013 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=639451 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 639451 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 639451 ']' 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.013 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.272 [2024-11-18 07:44:10.103611] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:17.272 [2024-11-18 07:44:10.103697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.272 [2024-11-18 07:44:10.178441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.272 [2024-11-18 07:44:10.225948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.272 [2024-11-18 07:44:10.226010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:17.272 [2024-11-18 07:44:10.226023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.272 [2024-11-18 07:44:10.226034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.272 [2024-11-18 07:44:10.226044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.272 [2024-11-18 07:44:10.226658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.272 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.272 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:17.272 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.272 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.272 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.530 [2024-11-18 07:44:10.372347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.530 [2024-11-18 07:44:10.388600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.530 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.531 malloc0 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:17.531 { 00:10:17.531 "params": { 00:10:17.531 "name": "Nvme$subsystem", 00:10:17.531 "trtype": "$TEST_TRANSPORT", 00:10:17.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.531 "adrfam": "ipv4", 00:10:17.531 "trsvcid": "$NVMF_PORT", 00:10:17.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.531 "hdgst": ${hdgst:-false}, 00:10:17.531 "ddgst": ${ddgst:-false} 00:10:17.531 }, 00:10:17.531 "method": "bdev_nvme_attach_controller" 00:10:17.531 } 00:10:17.531 EOF 00:10:17.531 )") 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:17.531 07:44:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:17.531 "params": { 00:10:17.531 "name": "Nvme1", 00:10:17.531 "trtype": "tcp", 00:10:17.531 "traddr": "10.0.0.2", 00:10:17.531 "adrfam": "ipv4", 00:10:17.531 "trsvcid": "4420", 00:10:17.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.531 "hdgst": false, 00:10:17.531 "ddgst": false 00:10:17.531 }, 00:10:17.531 "method": "bdev_nvme_attach_controller" 00:10:17.531 }' 00:10:17.531 [2024-11-18 07:44:10.471185] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:17.531 [2024-11-18 07:44:10.471292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639514 ] 00:10:17.531 [2024-11-18 07:44:10.542752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.531 [2024-11-18 07:44:10.588791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.097 Running I/O for 10 seconds... 
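The rpc_cmd calls traced above (rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py) amount to the following target-side provisioning sequence. This is a reference sketch only: it requires a running nvmf_tgt to do anything, and the rpc.py path is assumed to be the standard SPDK checkout layout; the subcommands and flags are taken verbatim from the trace.

```shell
RPC=./scripts/rpc.py    # assumed location; adjust to your SPDK checkout

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                    # TCP transport with zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                                  # subsystem, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                                      # listen on the namespaced target IP
$RPC bdev_malloc_create 32 4096 -b malloc0                           # 32 MiB RAM-backed bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as namespace 1
```

bdevperf then connects as an initiator over the JSON config shown above and drives the verify workload against that namespace.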
00:10:19.968 5780.00 IOPS, 45.16 MiB/s [2024-11-18T06:44:13.990Z] 5836.50 IOPS, 45.60 MiB/s [2024-11-18T06:44:15.365Z] 5836.00 IOPS, 45.59 MiB/s [2024-11-18T06:44:16.355Z] 5834.75 IOPS, 45.58 MiB/s [2024-11-18T06:44:17.064Z] 5840.20 IOPS, 45.63 MiB/s [2024-11-18T06:44:17.998Z] 5850.50 IOPS, 45.71 MiB/s [2024-11-18T06:44:19.381Z] 5853.57 IOPS, 45.73 MiB/s [2024-11-18T06:44:20.314Z] 5854.75 IOPS, 45.74 MiB/s [2024-11-18T06:44:21.249Z] 5858.89 IOPS, 45.77 MiB/s [2024-11-18T06:44:21.249Z] 5859.20 IOPS, 45.77 MiB/s 00:10:28.161 Latency(us) 00:10:28.161 [2024-11-18T06:44:21.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:28.161 Verification LBA range: start 0x0 length 0x1000 00:10:28.161 Nvme1n1 : 10.01 5859.40 45.78 0.00 0.00 21785.34 664.46 30098.01 00:10:28.161 [2024-11-18T06:44:21.249Z] =================================================================================================================== 00:10:28.161 [2024-11-18T06:44:21.249Z] Total : 5859.40 45.78 0.00 0.00 21785.34 664.46 30098.01 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=640798 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:28.161 07:44:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:28.161 { 00:10:28.161 "params": { 00:10:28.161 "name": "Nvme$subsystem", 00:10:28.161 "trtype": "$TEST_TRANSPORT", 00:10:28.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.161 "adrfam": "ipv4", 00:10:28.161 "trsvcid": "$NVMF_PORT", 00:10:28.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.161 "hdgst": ${hdgst:-false}, 00:10:28.161 "ddgst": ${ddgst:-false} 00:10:28.161 }, 00:10:28.161 "method": "bdev_nvme_attach_controller" 00:10:28.161 } 00:10:28.161 EOF 00:10:28.161 )") 00:10:28.161 [2024-11-18 07:44:21.166927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.166968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:28.161 07:44:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:28.161 "params": { 00:10:28.161 "name": "Nvme1", 00:10:28.161 "trtype": "tcp", 00:10:28.161 "traddr": "10.0.0.2", 00:10:28.161 "adrfam": "ipv4", 00:10:28.161 "trsvcid": "4420", 00:10:28.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.161 "hdgst": false, 00:10:28.161 "ddgst": false 00:10:28.161 }, 00:10:28.161 "method": "bdev_nvme_attach_controller" 00:10:28.161 }' 00:10:28.161 [2024-11-18 07:44:21.174869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.174892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.182890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.182911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.190893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.190914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.198918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.198939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.206937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.206957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.208591] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:10:28.161 [2024-11-18 07:44:21.208650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid640798 ] 00:10:28.161 [2024-11-18 07:44:21.214959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.214979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.222980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.223000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.231002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.231021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.239021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.239047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.161 [2024-11-18 07:44:21.247045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.161 [2024-11-18 07:44:21.247066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.420 [2024-11-18 07:44:21.255066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.420 [2024-11-18 07:44:21.255087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.420 [2024-11-18 07:44:21.263088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.420 [2024-11-18 07:44:21.263110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:28.420 [2024-11-18 07:44:21.271108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.420 [2024-11-18 07:44:21.271128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.278174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.421 [2024-11-18 07:44:21.279130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.279150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.287183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.287220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.295195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.295231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.303196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.303217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.311213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.311233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.319239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.319260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.327232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.421 [2024-11-18 07:44:21.327257] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.327275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.335278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.335298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.343319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.343349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.351352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.351388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.359373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.359408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.367398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.367450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.375414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.375451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.383434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.383509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.391458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.391516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.399452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.399488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.407521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.407559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.415544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.415580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.423561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.423589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.431576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.431598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.439603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.439624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.447631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.447659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.455668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 
[2024-11-18 07:44:21.455694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.463674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.463698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.471692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.471716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.479727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.479750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.487746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.487784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.495781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.495802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.421 [2024-11-18 07:44:21.503806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.421 [2024-11-18 07:44:21.503827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.511825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.511859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.519846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.519866] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.527862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.527884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.535895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.535922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.543909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.543930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.551933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.551955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.559954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.559974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.567977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.567999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.575998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.576019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.584017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.584037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:28.680 [2024-11-18 07:44:21.592038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.592058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.600062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.600082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.608083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.608126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.616104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.616125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.624127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.624149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.632154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.632179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 Running I/O for 5 seconds... 
00:10:28.680 [2024-11-18 07:44:21.640171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.640193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.655557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.655587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.668045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.668074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.680257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.680285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.692284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.692313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.704625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.704653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.716604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.716642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.728624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.728653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.740694] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.740722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.752737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.752766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.680 [2024-11-18 07:44:21.764920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.680 [2024-11-18 07:44:21.764949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.777053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.777081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.789430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.789459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.801525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.801555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.813832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.813860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.825745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.825774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.837921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.837949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.850229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.850257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.862566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.862596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.874559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.874588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.886713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.886756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.898887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.898917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.911169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.911197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.923361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.923404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.935675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 
[2024-11-18 07:44:21.935704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.947948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.947975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.960263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.960290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.972894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.972921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.985301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.985328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:21.998071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:21.998098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:22.010215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:22.010243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.939 [2024-11-18 07:44:22.022540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.939 [2024-11-18 07:44:22.022567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.034880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.034907] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.047586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.047614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.059680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.059709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.072445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.072474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.084760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.084788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.096330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.096357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.108463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.108513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.121299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.121325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.133688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.133716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:29.198 [2024-11-18 07:44:22.145762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.145791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.158089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.158118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.170430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.170457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.182617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.182644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.194579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.194622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.207220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.207245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.219541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.219568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.231355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.231381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.198 [2024-11-18 07:44:22.244079] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.198 [2024-11-18 07:44:22.244105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 12 ms intervals from [2024-11-18 07:44:22.258309] through [2024-11-18 07:44:24.334150], elapsed time 00:10:29.198 to 00:10:31.266; repeats elided. Interleaved fio progress readings retained below ...]
10273.00 IOPS, 80.26 MiB/s [2024-11-18T06:44:22.803Z]
10377.50 IOPS, 81.07 MiB/s [2024-11-18T06:44:23.837Z]
[... log truncated mid-line ...]
add namespace 00:10:31.266 [2024-11-18 07:44:24.346386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.266 [2024-11-18 07:44:24.346427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.358575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.358603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.370933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.370960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.383503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.383531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.395335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.395362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.407279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.407307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.419548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.419576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.431678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.431706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.444164] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.444191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.456288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.456315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.468511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.468539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.480658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.480700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.493450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.493502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.505367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.505393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.517120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.517147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.529064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.529092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.540987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.541013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.552990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.553031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.565386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.565427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.577500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.577527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.589594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.589622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.524 [2024-11-18 07:44:24.601367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.524 [2024-11-18 07:44:24.601411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.613227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.613254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.625070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.625097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.637227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 
[2024-11-18 07:44:24.637254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.648575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.648603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 10421.00 IOPS, 81.41 MiB/s [2024-11-18T06:44:24.870Z] [2024-11-18 07:44:24.660672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.660701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.672566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.672594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.684531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.684558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.696914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.696940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.709304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.709331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.721601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.721629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.734116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 
[2024-11-18 07:44:24.734143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.746752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.746780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.758761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.758790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.771114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.771141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.783687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.783715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.795400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.795428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.807739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.807767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.820850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.820892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.833042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.833069] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.845415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.845442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.857660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.857688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.782 [2024-11-18 07:44:24.870108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.782 [2024-11-18 07:44:24.870149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.039 [2024-11-18 07:44:24.882642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.039 [2024-11-18 07:44:24.882684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.039 [2024-11-18 07:44:24.894921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.039 [2024-11-18 07:44:24.894948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.039 [2024-11-18 07:44:24.907308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.039 [2024-11-18 07:44:24.907335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:24.919690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:24.919718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:24.932195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:24.932223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:32.040 [2024-11-18 07:44:24.944679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:24.944708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:24.956554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:24.956582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:24.967593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:24.967621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:24.979523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:24.979560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:24.990713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:24.990741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.002709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.002737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.014698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.014725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.027394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.027421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.039576] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.039604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.051487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.051524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.065244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.065272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.076307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.076337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.088795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.088823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.101073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.101101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.113342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.113369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.040 [2024-11-18 07:44:25.125261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.040 [2024-11-18 07:44:25.125288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.296 [2024-11-18 07:44:25.137712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:32.296 [2024-11-18 07:44:25.137740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.296 [2024-11-18 07:44:25.149627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.149654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.162119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.162146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.174548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.174588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.187073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.187100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.199727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.199756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.212460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.212511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.224845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.224873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.237252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 
[2024-11-18 07:44:25.237279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.249330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.249357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.261792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.261835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.273842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.273869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.286053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.286080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.297909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.297937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.309903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.309930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.322558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.322586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.334703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.334730] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.346733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.346762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.358532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.358560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.370705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.370733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.297 [2024-11-18 07:44:25.382800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.297 [2024-11-18 07:44:25.382828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.394874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.394901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.406926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.406952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.419285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.419312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.431690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.431718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:32.555 [2024-11-18 07:44:25.443798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.443839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.456138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.456164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.468508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.468535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.480684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.480712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.492742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.492770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.504930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.504958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.517169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.517196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.529218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.529246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.541184] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.541211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.553250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.553292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.565039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.565065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.577220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.577247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.589064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.589091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.601131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.601158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.613241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.613268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.625522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.625563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.555 [2024-11-18 07:44:25.637302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:32.555 [2024-11-18 07:44:25.637329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.813 
10423.75 IOPS, 81.44 MiB/s [2024-11-18T06:44:25.901Z] 
[... identical subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1517:nvmf_rpc_ns_paused error pair repeated every ~12 ms from 07:44:25.649 through 07:44:26.658 ...] 
10433.80 IOPS, 81.51 MiB/s [2024-11-18T06:44:26.679Z] 
00:10:33.591 Latency(us) 
00:10:33.591 [2024-11-18T06:44:26.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:10:33.591 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:10:33.591 Nvme1n1 : 5.01 10437.01 81.54 0.00 0.00 12247.10 5364.24 23787.14 
00:10:33.591 [2024-11-18T06:44:26.679Z] =================================================================================================================== 
00:10:33.591 [2024-11-18T06:44:26.679Z] Total : 10437.01 81.54 0.00 0.00 12247.10 5364.24 23787.14 
00:10:33.591 [2024-11-18 07:44:26.665215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.591 [2024-11-18 07:44:26.665238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... identical error pair repeated through 07:44:26.861 ...] 
00:10:33.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (640798) - No such process 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 640798 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- 
# set +x 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.850 delay0 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.850 07:44:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:34.107 [2024-11-18 07:44:26.984350] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:40.662 Initializing NVMe Controllers 00:10:40.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:40.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:40.662 Initialization complete. Launching workers. 
00:10:40.662 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 80 00:10:40.662 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 367, failed to submit 33 00:10:40.662 success 191, unsuccessful 176, failed 0 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.662 rmmod nvme_tcp 00:10:40.662 rmmod nvme_fabrics 00:10:40.662 rmmod nvme_keyring 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 639451 ']' 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 639451 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 639451 ']' 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 639451 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639451 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639451' 00:10:40.662 killing process with pid 639451 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 639451 00:10:40.662 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 639451 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.663 07:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:42.585 00:10:42.585 real 0m27.969s 00:10:42.585 user 0m40.568s 00:10:42.585 sys 0m8.385s 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.585 ************************************ 00:10:42.585 END TEST nvmf_zcopy 00:10:42.585 ************************************ 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.585 ************************************ 00:10:42.585 START TEST nvmf_nmic 00:10:42.585 ************************************ 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:42.585 * Looking for test storage... 
00:10:42.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.585 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.844 07:44:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.844 --rc genhtml_branch_coverage=1 00:10:42.844 --rc genhtml_function_coverage=1 00:10:42.844 --rc genhtml_legend=1 00:10:42.844 --rc geninfo_all_blocks=1 00:10:42.844 --rc geninfo_unexecuted_blocks=1 
00:10:42.844 00:10:42.844 ' 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.844 --rc genhtml_branch_coverage=1 00:10:42.844 --rc genhtml_function_coverage=1 00:10:42.844 --rc genhtml_legend=1 00:10:42.844 --rc geninfo_all_blocks=1 00:10:42.844 --rc geninfo_unexecuted_blocks=1 00:10:42.844 00:10:42.844 ' 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.844 --rc genhtml_branch_coverage=1 00:10:42.844 --rc genhtml_function_coverage=1 00:10:42.844 --rc genhtml_legend=1 00:10:42.844 --rc geninfo_all_blocks=1 00:10:42.844 --rc geninfo_unexecuted_blocks=1 00:10:42.844 00:10:42.844 ' 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.844 --rc genhtml_branch_coverage=1 00:10:42.844 --rc genhtml_function_coverage=1 00:10:42.844 --rc genhtml_legend=1 00:10:42.844 --rc geninfo_all_blocks=1 00:10:42.844 --rc geninfo_unexecuted_blocks=1 00:10:42.844 00:10:42.844 ' 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.844 07:44:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.844 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:42.845 
07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:42.845 07:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.379 07:44:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:45.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:45.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:45.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:45.379 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:45.379 
07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.379 07:44:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.379 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.379 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:10:45.380 00:10:45.380 --- 10.0.0.2 ping statistics --- 00:10:45.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.380 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:10:45.380 00:10:45.380 --- 10.0.0.1 ping statistics --- 00:10:45.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.380 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=644197 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.380 
07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 644197 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 644197 ']' 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.380 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.380 [2024-11-18 07:44:38.227413] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:45.380 [2024-11-18 07:44:38.227523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.380 [2024-11-18 07:44:38.303374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.380 [2024-11-18 07:44:38.353701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.380 [2024-11-18 07:44:38.353777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.380 [2024-11-18 07:44:38.353791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.380 [2024-11-18 07:44:38.353802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:45.380 [2024-11-18 07:44:38.353811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.380 [2024-11-18 07:44:38.355399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.380 [2024-11-18 07:44:38.355466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.380 [2024-11-18 07:44:38.355528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.380 [2024-11-18 07:44:38.355531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.638 [2024-11-18 07:44:38.505072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:45.638 07:44:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.638 Malloc0 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.638 [2024-11-18 07:44:38.571589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.638 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:45.639 test case1: single bdev can't be used in multiple subsystems 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.639 [2024-11-18 07:44:38.595380] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:45.639 [2024-11-18 07:44:38.595410] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:45.639 [2024-11-18 07:44:38.595425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:45.639 request: 00:10:45.639 { 00:10:45.639 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:45.639 "namespace": { 00:10:45.639 "bdev_name": "Malloc0", 00:10:45.639 "no_auto_visible": false 00:10:45.639 }, 00:10:45.639 "method": "nvmf_subsystem_add_ns", 00:10:45.639 "req_id": 1 00:10:45.639 } 00:10:45.639 Got JSON-RPC error response 00:10:45.639 response: 00:10:45.639 { 00:10:45.639 "code": -32602, 00:10:45.639 "message": "Invalid parameters" 00:10:45.639 } 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:45.639 Adding namespace failed - expected result. 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:45.639 test case2: host connect to nvmf target in multiple paths 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.639 [2024-11-18 07:44:38.603530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.639 07:44:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.203 07:44:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:47.135 07:44:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.135 07:44:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.135 07:44:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.135 07:44:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:47.135 07:44:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:49.032 07:44:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:49.032 07:44:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:49.032 07:44:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.032 07:44:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:49.032 07:44:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.032 07:44:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:49.032 07:44:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:49.032 [global] 00:10:49.032 thread=1 
00:10:49.032 invalidate=1 00:10:49.032 rw=write 00:10:49.032 time_based=1 00:10:49.032 runtime=1 00:10:49.032 ioengine=libaio 00:10:49.032 direct=1 00:10:49.032 bs=4096 00:10:49.032 iodepth=1 00:10:49.032 norandommap=0 00:10:49.032 numjobs=1 00:10:49.032 00:10:49.032 verify_dump=1 00:10:49.032 verify_backlog=512 00:10:49.032 verify_state_save=0 00:10:49.032 do_verify=1 00:10:49.032 verify=crc32c-intel 00:10:49.032 [job0] 00:10:49.032 filename=/dev/nvme0n1 00:10:49.032 Could not set queue depth (nvme0n1) 00:10:49.290 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.290 fio-3.35 00:10:49.290 Starting 1 thread 00:10:50.223 00:10:50.223 job0: (groupid=0, jobs=1): err= 0: pid=644717: Mon Nov 18 07:44:43 2024 00:10:50.223 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:50.223 slat (nsec): min=6780, max=34642, avg=11693.39, stdev=4865.52 00:10:50.223 clat (usec): min=177, max=634, avg=230.83, stdev=33.02 00:10:50.223 lat (usec): min=184, max=643, avg=242.52, stdev=34.43 00:10:50.223 clat percentiles (usec): 00:10:50.223 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:10:50.223 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:10:50.223 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:10:50.223 | 99.00th=[ 314], 99.50th=[ 490], 99.90th=[ 594], 99.95th=[ 619], 00:10:50.223 | 99.99th=[ 635] 00:10:50.223 write: IOPS=2490, BW=9962KiB/s (10.2MB/s)(9972KiB/1001msec); 0 zone resets 00:10:50.223 slat (usec): min=8, max=29156, avg=28.87, stdev=583.63 00:10:50.223 clat (usec): min=130, max=309, avg=165.77, stdev=20.23 00:10:50.223 lat (usec): min=139, max=29336, avg=194.64, stdev=584.45 00:10:50.223 clat percentiles (usec): 00:10:50.223 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:10:50.223 | 30.00th=[ 153], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:10:50.223 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 
95.00th=[ 196], 00:10:50.223 | 99.00th=[ 217], 99.50th=[ 243], 99.90th=[ 285], 99.95th=[ 302], 00:10:50.223 | 99.99th=[ 310] 00:10:50.223 bw ( KiB/s): min= 9552, max= 9552, per=95.88%, avg=9552.00, stdev= 0.00, samples=1 00:10:50.223 iops : min= 2388, max= 2388, avg=2388.00, stdev= 0.00, samples=1 00:10:50.223 lat (usec) : 250=93.15%, 500=6.67%, 750=0.18% 00:10:50.223 cpu : usr=5.90%, sys=7.90%, ctx=4543, majf=0, minf=1 00:10:50.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.223 issued rwts: total=2048,2493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.223 00:10:50.223 Run status group 0 (all jobs): 00:10:50.223 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:50.223 WRITE: bw=9962KiB/s (10.2MB/s), 9962KiB/s-9962KiB/s (10.2MB/s-10.2MB/s), io=9972KiB (10.2MB), run=1001-1001msec 00:10:50.223 00:10:50.223 Disk stats (read/write): 00:10:50.223 nvme0n1: ios=2035/2048, merge=0/0, ticks=1443/316, in_queue=1759, util=98.60% 00:10:50.223 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.481 07:44:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.481 rmmod nvme_tcp 00:10:50.481 rmmod nvme_fabrics 00:10:50.481 rmmod nvme_keyring 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 644197 ']' 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 644197 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 644197 ']' 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 644197 00:10:50.481 07:44:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 644197 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 644197' 00:10:50.481 killing process with pid 644197 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 644197 00:10:50.481 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 644197 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.741 07:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.278 00:10:53.278 real 0m10.281s 00:10:53.278 user 0m22.838s 00:10:53.278 sys 0m2.665s 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.278 ************************************ 00:10:53.278 END TEST nvmf_nmic 00:10:53.278 ************************************ 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:53.278 ************************************ 00:10:53.278 START TEST nvmf_fio_target 00:10:53.278 ************************************ 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:53.278 * Looking for test storage... 
00:10:53.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:53.278 07:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:53.278 07:44:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:53.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.278 
--rc genhtml_branch_coverage=1 00:10:53.278 --rc genhtml_function_coverage=1 00:10:53.278 --rc genhtml_legend=1 00:10:53.278 --rc geninfo_all_blocks=1 00:10:53.278 --rc geninfo_unexecuted_blocks=1 00:10:53.278 00:10:53.278 ' 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:53.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.278 --rc genhtml_branch_coverage=1 00:10:53.278 --rc genhtml_function_coverage=1 00:10:53.278 --rc genhtml_legend=1 00:10:53.278 --rc geninfo_all_blocks=1 00:10:53.278 --rc geninfo_unexecuted_blocks=1 00:10:53.278 00:10:53.278 ' 00:10:53.278 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:53.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.278 --rc genhtml_branch_coverage=1 00:10:53.278 --rc genhtml_function_coverage=1 00:10:53.278 --rc genhtml_legend=1 00:10:53.278 --rc geninfo_all_blocks=1 00:10:53.278 --rc geninfo_unexecuted_blocks=1 00:10:53.278 00:10:53.279 ' 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:53.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.279 --rc genhtml_branch_coverage=1 00:10:53.279 --rc genhtml_function_coverage=1 00:10:53.279 --rc genhtml_legend=1 00:10:53.279 --rc geninfo_all_blocks=1 00:10:53.279 --rc geninfo_unexecuted_blocks=1 00:10:53.279 00:10:53.279 ' 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.279 
07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.279 07:44:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.279 07:44:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.279 07:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.184 07:44:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:55.184 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:55.184 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.184 
07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:55.184 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.184 07:44:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:55.184 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.184 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:10:55.443 00:10:55.443 --- 10.0.0.2 ping statistics --- 00:10:55.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.443 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:10:55.443 00:10:55.443 --- 10.0.0.1 ping statistics --- 00:10:55.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.443 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=646926 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 646926 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 646926 ']' 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.443 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.443 [2024-11-18 07:44:48.443616] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:55.443 [2024-11-18 07:44:48.443703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.444 [2024-11-18 07:44:48.518280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.702 [2024-11-18 07:44:48.568346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.702 [2024-11-18 07:44:48.568407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.702 [2024-11-18 07:44:48.568427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.702 [2024-11-18 07:44:48.568439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.702 [2024-11-18 07:44:48.568448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:55.702 [2024-11-18 07:44:48.570028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.702 [2024-11-18 07:44:48.570082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.702 [2024-11-18 07:44:48.570138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.702 [2024-11-18 07:44:48.570141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.702 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.702 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:55.702 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.702 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.702 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.702 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.702 07:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:55.960 [2024-11-18 07:44:49.001954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.960 07:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.526 07:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:56.526 07:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.783 07:44:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:56.783 07:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.041 07:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:57.041 07:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.299 07:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:57.299 07:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:57.556 07:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.814 07:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:57.814 07:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.073 07:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:58.073 07:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.331 07:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:58.331 07:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:58.590 07:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:58.876 07:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:58.876 07:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.158 07:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:59.158 07:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.416 07:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.674 [2024-11-18 07:44:52.688684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.674 07:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:59.932 07:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:00.190 07:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:01.123 07:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:01.124 07:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:01.124 07:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.124 07:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:01.124 07:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:01.124 07:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:03.066 07:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:03.066 07:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:03.066 07:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:03.066 07:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:03.066 07:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.066 07:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:03.066 07:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:03.066 [global] 00:11:03.066 thread=1 00:11:03.066 invalidate=1 00:11:03.066 rw=write 00:11:03.066 time_based=1 00:11:03.066 runtime=1 00:11:03.066 ioengine=libaio 00:11:03.066 direct=1 00:11:03.066 bs=4096 00:11:03.066 iodepth=1 00:11:03.066 norandommap=0 00:11:03.066 numjobs=1 00:11:03.066 00:11:03.066 
verify_dump=1 00:11:03.066 verify_backlog=512 00:11:03.066 verify_state_save=0 00:11:03.066 do_verify=1 00:11:03.066 verify=crc32c-intel 00:11:03.066 [job0] 00:11:03.066 filename=/dev/nvme0n1 00:11:03.066 [job1] 00:11:03.066 filename=/dev/nvme0n2 00:11:03.066 [job2] 00:11:03.066 filename=/dev/nvme0n3 00:11:03.066 [job3] 00:11:03.066 filename=/dev/nvme0n4 00:11:03.066 Could not set queue depth (nvme0n1) 00:11:03.066 Could not set queue depth (nvme0n2) 00:11:03.066 Could not set queue depth (nvme0n3) 00:11:03.066 Could not set queue depth (nvme0n4) 00:11:03.324 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.324 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.324 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.324 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.324 fio-3.35 00:11:03.324 Starting 4 threads 00:11:04.698 00:11:04.698 job0: (groupid=0, jobs=1): err= 0: pid=648011: Mon Nov 18 07:44:57 2024 00:11:04.698 read: IOPS=294, BW=1179KiB/s (1207kB/s)(1224KiB/1038msec) 00:11:04.698 slat (nsec): min=7020, max=38471, avg=11224.60, stdev=6510.67 00:11:04.698 clat (usec): min=197, max=41955, avg=2948.42, stdev=10132.34 00:11:04.698 lat (usec): min=204, max=41990, avg=2959.65, stdev=10134.98 00:11:04.698 clat percentiles (usec): 00:11:04.698 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:11:04.698 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 245], 00:11:04.698 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 441], 95.00th=[41157], 00:11:04.698 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:04.698 | 99.99th=[42206] 00:11:04.698 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:11:04.698 slat (nsec): min=8738, max=52971, 
avg=14157.13, stdev=6862.67 00:11:04.698 clat (usec): min=150, max=390, avg=237.03, stdev=36.73 00:11:04.698 lat (usec): min=160, max=400, avg=251.19, stdev=35.35 00:11:04.698 clat percentiles (usec): 00:11:04.698 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 208], 00:11:04.698 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 243], 00:11:04.698 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 289], 95.00th=[ 306], 00:11:04.698 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 392], 99.95th=[ 392], 00:11:04.698 | 99.99th=[ 392] 00:11:04.698 bw ( KiB/s): min= 4096, max= 4096, per=16.03%, avg=4096.00, stdev= 0.00, samples=1 00:11:04.698 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:04.698 lat (usec) : 250=67.36%, 500=30.20% 00:11:04.698 lat (msec) : 50=2.44% 00:11:04.698 cpu : usr=1.06%, sys=1.06%, ctx=818, majf=0, minf=1 00:11:04.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.698 issued rwts: total=306,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.698 job1: (groupid=0, jobs=1): err= 0: pid=648012: Mon Nov 18 07:44:57 2024 00:11:04.698 read: IOPS=1533, BW=6135KiB/s (6283kB/s)(6160KiB/1004msec) 00:11:04.698 slat (nsec): min=7035, max=41358, avg=12891.32, stdev=4985.52 00:11:04.698 clat (usec): min=191, max=42014, avg=354.10, stdev=2106.39 00:11:04.698 lat (usec): min=200, max=42050, avg=366.99, stdev=2106.99 00:11:04.698 clat percentiles (usec): 00:11:04.698 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 233], 00:11:04.698 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:11:04.698 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 277], 00:11:04.698 | 99.00th=[ 310], 99.50th=[ 506], 99.90th=[41681], 
99.95th=[42206], 00:11:04.698 | 99.99th=[42206] 00:11:04.698 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:11:04.698 slat (usec): min=7, max=1037, avg=16.13, stdev=27.84 00:11:04.698 clat (usec): min=128, max=469, avg=191.19, stdev=37.88 00:11:04.698 lat (usec): min=137, max=1253, avg=207.32, stdev=48.96 00:11:04.698 clat percentiles (usec): 00:11:04.698 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 161], 00:11:04.698 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 192], 00:11:04.698 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 243], 95.00th=[ 265], 00:11:04.698 | 99.00th=[ 318], 99.50th=[ 343], 99.90th=[ 359], 99.95th=[ 371], 00:11:04.698 | 99.99th=[ 469] 00:11:04.698 bw ( KiB/s): min= 7240, max= 9144, per=32.06%, avg=8192.00, stdev=1346.33, samples=2 00:11:04.698 iops : min= 1810, max= 2286, avg=2048.00, stdev=336.58, samples=2 00:11:04.698 lat (usec) : 250=78.43%, 500=21.32%, 750=0.14% 00:11:04.698 lat (msec) : 50=0.11% 00:11:04.698 cpu : usr=4.19%, sys=6.28%, ctx=3591, majf=0, minf=1 00:11:04.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.699 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.699 job2: (groupid=0, jobs=1): err= 0: pid=648013: Mon Nov 18 07:44:57 2024 00:11:04.699 read: IOPS=1512, BW=6051KiB/s (6196kB/s)(6160KiB/1018msec) 00:11:04.699 slat (nsec): min=4763, max=61794, avg=12149.41, stdev=5827.47 00:11:04.699 clat (usec): min=199, max=42025, avg=346.06, stdev=2087.91 00:11:04.699 lat (usec): min=205, max=42042, avg=358.21, stdev=2088.55 00:11:04.699 clat percentiles (usec): 00:11:04.699 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:11:04.699 | 30.00th=[ 227], 40.00th=[ 
231], 50.00th=[ 233], 60.00th=[ 239], 00:11:04.699 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 281], 00:11:04.699 | 99.00th=[ 334], 99.50th=[ 371], 99.90th=[41157], 99.95th=[42206], 00:11:04.699 | 99.99th=[42206] 00:11:04.699 write: IOPS=2011, BW=8047KiB/s (8240kB/s)(8192KiB/1018msec); 0 zone resets 00:11:04.699 slat (usec): min=6, max=957, avg=14.48, stdev=21.66 00:11:04.699 clat (usec): min=158, max=1064, avg=206.73, stdev=46.66 00:11:04.699 lat (usec): min=165, max=1283, avg=221.22, stdev=52.15 00:11:04.699 clat percentiles (usec): 00:11:04.699 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 178], 00:11:04.699 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:11:04.699 | 70.00th=[ 215], 80.00th=[ 237], 90.00th=[ 273], 95.00th=[ 281], 00:11:04.699 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 791], 99.95th=[ 988], 00:11:04.699 | 99.99th=[ 1057] 00:11:04.699 bw ( KiB/s): min= 7880, max= 8504, per=32.06%, avg=8192.00, stdev=441.23, samples=2 00:11:04.699 iops : min= 1970, max= 2126, avg=2048.00, stdev=110.31, samples=2 00:11:04.699 lat (usec) : 250=83.00%, 500=16.81%, 1000=0.06% 00:11:04.699 lat (msec) : 2=0.03%, 50=0.11% 00:11:04.699 cpu : usr=2.46%, sys=4.72%, ctx=3590, majf=0, minf=1 00:11:04.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.699 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.699 job3: (groupid=0, jobs=1): err= 0: pid=648014: Mon Nov 18 07:44:57 2024 00:11:04.699 read: IOPS=1479, BW=5919KiB/s (6061kB/s)(6168KiB/1042msec) 00:11:04.699 slat (nsec): min=4721, max=52455, avg=11286.05, stdev=5916.28 00:11:04.699 clat (usec): min=182, max=42031, avg=384.83, stdev=2567.70 00:11:04.699 lat (usec): min=187, 
max=42067, avg=396.12, stdev=2568.82 00:11:04.699 clat percentiles (usec): 00:11:04.699 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:11:04.699 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:11:04.699 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 285], 00:11:04.699 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[41681], 99.95th=[42206], 00:11:04.699 | 99.99th=[42206] 00:11:04.699 write: IOPS=1965, BW=7862KiB/s (8050kB/s)(8192KiB/1042msec); 0 zone resets 00:11:04.699 slat (usec): min=6, max=23985, avg=24.62, stdev=530.04 00:11:04.699 clat (usec): min=134, max=1840, avg=179.96, stdev=52.66 00:11:04.699 lat (usec): min=140, max=24191, avg=204.58, stdev=533.48 00:11:04.699 clat percentiles (usec): 00:11:04.699 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:11:04.699 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:11:04.699 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 208], 95.00th=[ 221], 00:11:04.699 | 99.00th=[ 355], 99.50th=[ 396], 99.90th=[ 553], 99.95th=[ 955], 00:11:04.699 | 99.99th=[ 1844] 00:11:04.699 bw ( KiB/s): min= 6736, max= 9648, per=32.06%, avg=8192.00, stdev=2059.09, samples=2 00:11:04.699 iops : min= 1684, max= 2412, avg=2048.00, stdev=514.77, samples=2 00:11:04.699 lat (usec) : 250=94.07%, 500=5.65%, 750=0.03%, 1000=0.03% 00:11:04.699 lat (msec) : 2=0.03%, 4=0.03%, 50=0.17% 00:11:04.699 cpu : usr=2.31%, sys=4.51%, ctx=3593, majf=0, minf=1 00:11:04.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.699 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.699 00:11:04.699 Run status group 0 (all jobs): 00:11:04.699 READ: bw=18.5MiB/s (19.4MB/s), 1179KiB/s-6135KiB/s 
(1207kB/s-6283kB/s), io=19.2MiB (20.2MB), run=1004-1042msec 00:11:04.699 WRITE: bw=25.0MiB/s (26.2MB/s), 1973KiB/s-8159KiB/s (2020kB/s-8355kB/s), io=26.0MiB (27.3MB), run=1004-1042msec 00:11:04.699 00:11:04.699 Disk stats (read/write): 00:11:04.699 nvme0n1: ios=325/512, merge=0/0, ticks=711/102, in_queue=813, util=86.17% 00:11:04.699 nvme0n2: ios=1589/2048, merge=0/0, ticks=547/365, in_queue=912, util=100.00% 00:11:04.699 nvme0n3: ios=1599/2026, merge=0/0, ticks=574/404, in_queue=978, util=97.59% 00:11:04.699 nvme0n4: ios=1612/2048, merge=0/0, ticks=608/355, in_queue=963, util=97.89% 00:11:04.699 07:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:04.699 [global] 00:11:04.699 thread=1 00:11:04.699 invalidate=1 00:11:04.699 rw=randwrite 00:11:04.699 time_based=1 00:11:04.699 runtime=1 00:11:04.699 ioengine=libaio 00:11:04.699 direct=1 00:11:04.699 bs=4096 00:11:04.699 iodepth=1 00:11:04.699 norandommap=0 00:11:04.699 numjobs=1 00:11:04.699 00:11:04.699 verify_dump=1 00:11:04.699 verify_backlog=512 00:11:04.699 verify_state_save=0 00:11:04.699 do_verify=1 00:11:04.699 verify=crc32c-intel 00:11:04.699 [job0] 00:11:04.699 filename=/dev/nvme0n1 00:11:04.699 [job1] 00:11:04.699 filename=/dev/nvme0n2 00:11:04.699 [job2] 00:11:04.699 filename=/dev/nvme0n3 00:11:04.699 [job3] 00:11:04.699 filename=/dev/nvme0n4 00:11:04.699 Could not set queue depth (nvme0n1) 00:11:04.699 Could not set queue depth (nvme0n2) 00:11:04.699 Could not set queue depth (nvme0n3) 00:11:04.699 Could not set queue depth (nvme0n4) 00:11:04.699 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.699 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.699 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:04.700 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.700 fio-3.35 00:11:04.700 Starting 4 threads 00:11:06.074 00:11:06.074 job0: (groupid=0, jobs=1): err= 0: pid=648245: Mon Nov 18 07:44:58 2024 00:11:06.074 read: IOPS=758, BW=3032KiB/s (3105kB/s)(3084KiB/1017msec) 00:11:06.074 slat (nsec): min=5440, max=48407, avg=11903.53, stdev=5942.89 00:11:06.074 clat (usec): min=172, max=42018, avg=992.28, stdev=5514.49 00:11:06.074 lat (usec): min=179, max=42031, avg=1004.18, stdev=5514.84 00:11:06.074 clat percentiles (usec): 00:11:06.074 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:11:06.074 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 233], 00:11:06.074 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 330], 00:11:06.074 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:06.074 | 99.99th=[42206] 00:11:06.074 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:11:06.074 slat (nsec): min=7335, max=58928, avg=13504.81, stdev=7455.79 00:11:06.074 clat (usec): min=127, max=1079, avg=213.36, stdev=81.34 00:11:06.074 lat (usec): min=136, max=1118, avg=226.87, stdev=83.24 00:11:06.074 clat percentiles (usec): 00:11:06.074 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:11:06.074 | 30.00th=[ 155], 40.00th=[ 167], 50.00th=[ 188], 60.00th=[ 239], 00:11:06.074 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 322], 00:11:06.074 | 99.00th=[ 465], 99.50th=[ 562], 99.90th=[ 988], 99.95th=[ 1074], 00:11:06.074 | 99.99th=[ 1074] 00:11:06.074 bw ( KiB/s): min= 8192, max= 8192, per=55.96%, avg=8192.00, stdev= 0.00, samples=1 00:11:06.074 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:06.074 lat (usec) : 250=67.02%, 500=31.64%, 750=0.39%, 1000=0.11% 00:11:06.074 lat (msec) : 2=0.06%, 50=0.78% 00:11:06.074 cpu : usr=1.97%, sys=2.76%, 
ctx=1798, majf=0, minf=1 00:11:06.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.074 issued rwts: total=771,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.074 job1: (groupid=0, jobs=1): err= 0: pid=648251: Mon Nov 18 07:44:58 2024 00:11:06.074 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:06.074 slat (nsec): min=4795, max=63329, avg=13024.75, stdev=6889.65 00:11:06.074 clat (usec): min=175, max=41085, avg=371.46, stdev=2322.60 00:11:06.074 lat (usec): min=180, max=41091, avg=384.48, stdev=2322.58 00:11:06.074 clat percentiles (usec): 00:11:06.074 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 212], 00:11:06.074 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:11:06.074 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 302], 95.00th=[ 338], 00:11:06.074 | 99.00th=[ 383], 99.50th=[ 553], 99.90th=[41157], 99.95th=[41157], 00:11:06.074 | 99.99th=[41157] 00:11:06.074 write: IOPS=1760, BW=7041KiB/s (7210kB/s)(7048KiB/1001msec); 0 zone resets 00:11:06.074 slat (nsec): min=5901, max=62924, avg=14936.10, stdev=7219.24 00:11:06.074 clat (usec): min=122, max=1047, avg=209.51, stdev=70.14 00:11:06.074 lat (usec): min=131, max=1085, avg=224.45, stdev=69.61 00:11:06.074 clat percentiles (usec): 00:11:06.074 | 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 163], 00:11:06.074 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 200], 00:11:06.074 | 70.00th=[ 241], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 310], 00:11:06.074 | 99.00th=[ 420], 99.50th=[ 490], 99.90th=[ 1037], 99.95th=[ 1045], 00:11:06.074 | 99.99th=[ 1045] 00:11:06.074 bw ( KiB/s): min= 4376, max= 4376, per=29.89%, avg=4376.00, stdev= 0.00, samples=1 00:11:06.074 iops : min= 
1094, max= 1094, avg=1094.00, stdev= 0.00, samples=1 00:11:06.074 lat (usec) : 250=77.71%, 500=21.83%, 750=0.18%, 1000=0.06% 00:11:06.074 lat (msec) : 2=0.06%, 50=0.15% 00:11:06.074 cpu : usr=3.40%, sys=6.10%, ctx=3298, majf=0, minf=1 00:11:06.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.074 issued rwts: total=1536,1762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.074 job2: (groupid=0, jobs=1): err= 0: pid=648252: Mon Nov 18 07:44:58 2024 00:11:06.074 read: IOPS=23, BW=92.4KiB/s (94.6kB/s)(96.0KiB/1039msec) 00:11:06.074 slat (nsec): min=7035, max=36909, avg=14575.33, stdev=5393.13 00:11:06.074 clat (usec): min=299, max=41518, avg=37626.89, stdev=11477.96 00:11:06.074 lat (usec): min=307, max=41555, avg=37641.46, stdev=11479.78 00:11:06.074 clat percentiles (usec): 00:11:06.074 | 1.00th=[ 302], 5.00th=[ 429], 10.00th=[40633], 20.00th=[41157], 00:11:06.074 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:06.074 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:06.074 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:06.074 | 99.99th=[41681] 00:11:06.074 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:11:06.074 slat (nsec): min=5812, max=44692, avg=11368.43, stdev=5859.25 00:11:06.074 clat (usec): min=146, max=446, avg=250.39, stdev=38.62 00:11:06.074 lat (usec): min=152, max=453, avg=261.76, stdev=37.44 00:11:06.074 clat percentiles (usec): 00:11:06.074 | 1.00th=[ 157], 5.00th=[ 180], 10.00th=[ 210], 20.00th=[ 233], 00:11:06.074 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:11:06.074 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 314], 00:11:06.074 
| 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 449], 99.95th=[ 449], 00:11:06.074 | 99.99th=[ 449] 00:11:06.074 bw ( KiB/s): min= 4096, max= 4096, per=27.98%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.074 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.074 lat (usec) : 250=49.63%, 500=46.27% 00:11:06.074 lat (msec) : 50=4.10% 00:11:06.074 cpu : usr=0.29%, sys=0.48%, ctx=536, majf=0, minf=1 00:11:06.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.074 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.074 job3: (groupid=0, jobs=1): err= 0: pid=648253: Mon Nov 18 07:44:58 2024 00:11:06.074 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec) 00:11:06.074 slat (nsec): min=13667, max=18708, avg=14402.09, stdev=1044.16 00:11:06.074 clat (usec): min=40383, max=41020, avg=40954.46, stdev=129.87 00:11:06.074 lat (usec): min=40398, max=41034, avg=40968.86, stdev=129.59 00:11:06.074 clat percentiles (usec): 00:11:06.074 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:06.074 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:06.074 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:06.074 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:06.074 | 99.99th=[41157] 00:11:06.074 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:11:06.074 slat (nsec): min=6865, max=41240, avg=14128.79, stdev=5025.21 00:11:06.074 clat (usec): min=150, max=406, avg=247.95, stdev=25.80 00:11:06.074 lat (usec): min=158, max=426, avg=262.08, stdev=25.84 00:11:06.074 clat percentiles (usec): 00:11:06.074 | 1.00th=[ 169], 5.00th=[ 215], 10.00th=[ 229], 
20.00th=[ 233], 00:11:06.074 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:11:06.074 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:11:06.074 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 408], 99.95th=[ 408], 00:11:06.074 | 99.99th=[ 408] 00:11:06.074 bw ( KiB/s): min= 4096, max= 4096, per=27.98%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.074 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.074 lat (usec) : 250=53.93%, 500=41.95% 00:11:06.074 lat (msec) : 50=4.12% 00:11:06.074 cpu : usr=0.29%, sys=0.77%, ctx=536, majf=0, minf=1 00:11:06.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.075 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.075 00:11:06.075 Run status group 0 (all jobs): 00:11:06.075 READ: bw=9041KiB/s (9258kB/s), 84.5KiB/s-6138KiB/s (86.6kB/s-6285kB/s), io=9412KiB (9638kB), run=1001-1041msec 00:11:06.075 WRITE: bw=14.3MiB/s (15.0MB/s), 1967KiB/s-7041KiB/s (2015kB/s-7210kB/s), io=14.9MiB (15.6MB), run=1001-1041msec 00:11:06.075 00:11:06.075 Disk stats (read/write): 00:11:06.075 nvme0n1: ios=819/1024, merge=0/0, ticks=977/203, in_queue=1180, util=98.10% 00:11:06.075 nvme0n2: ios=1106/1536, merge=0/0, ticks=457/308, in_queue=765, util=86.79% 00:11:06.075 nvme0n3: ios=65/512, merge=0/0, ticks=769/122, in_queue=891, util=91.97% 00:11:06.075 nvme0n4: ios=72/512, merge=0/0, ticks=1004/125, in_queue=1129, util=99.47% 00:11:06.075 07:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:06.075 [global] 00:11:06.075 thread=1 00:11:06.075 invalidate=1 00:11:06.075 
rw=write 00:11:06.075 time_based=1 00:11:06.075 runtime=1 00:11:06.075 ioengine=libaio 00:11:06.075 direct=1 00:11:06.075 bs=4096 00:11:06.075 iodepth=128 00:11:06.075 norandommap=0 00:11:06.075 numjobs=1 00:11:06.075 00:11:06.075 verify_dump=1 00:11:06.075 verify_backlog=512 00:11:06.075 verify_state_save=0 00:11:06.075 do_verify=1 00:11:06.075 verify=crc32c-intel 00:11:06.075 [job0] 00:11:06.075 filename=/dev/nvme0n1 00:11:06.075 [job1] 00:11:06.075 filename=/dev/nvme0n2 00:11:06.075 [job2] 00:11:06.075 filename=/dev/nvme0n3 00:11:06.075 [job3] 00:11:06.075 filename=/dev/nvme0n4 00:11:06.075 Could not set queue depth (nvme0n1) 00:11:06.075 Could not set queue depth (nvme0n2) 00:11:06.075 Could not set queue depth (nvme0n3) 00:11:06.075 Could not set queue depth (nvme0n4) 00:11:06.075 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.075 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.075 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.075 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.075 fio-3.35 00:11:06.075 Starting 4 threads 00:11:07.465 00:11:07.465 job0: (groupid=0, jobs=1): err= 0: pid=648479: Mon Nov 18 07:45:00 2024 00:11:07.465 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:11:07.465 slat (usec): min=3, max=17842, avg=125.34, stdev=770.36 00:11:07.465 clat (usec): min=7324, max=56193, avg=15465.84, stdev=7470.78 00:11:07.465 lat (usec): min=7331, max=56211, avg=15591.18, stdev=7526.10 00:11:07.465 clat percentiles (usec): 00:11:07.465 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11600], 00:11:07.465 | 30.00th=[12125], 40.00th=[12649], 50.00th=[12911], 60.00th=[13698], 00:11:07.465 | 70.00th=[15139], 80.00th=[16909], 90.00th=[22152], 95.00th=[33817], 00:11:07.465 
| 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:11:07.465 | 99.99th=[56361] 00:11:07.465 write: IOPS=3700, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1005msec); 0 zone resets 00:11:07.465 slat (usec): min=3, max=36429, avg=138.80, stdev=1007.01 00:11:07.465 clat (usec): min=4587, max=62067, avg=19359.20, stdev=11356.71 00:11:07.465 lat (usec): min=5277, max=62085, avg=19498.00, stdev=11449.27 00:11:07.465 clat percentiles (usec): 00:11:07.465 | 1.00th=[ 7308], 5.00th=[10290], 10.00th=[11207], 20.00th=[12256], 00:11:07.465 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[14222], 00:11:07.465 | 70.00th=[21103], 80.00th=[26870], 90.00th=[38011], 95.00th=[42730], 00:11:07.465 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56886], 99.95th=[61080], 00:11:07.465 | 99.99th=[62129] 00:11:07.465 bw ( KiB/s): min= 8896, max=19896, per=21.42%, avg=14396.00, stdev=7778.17, samples=2 00:11:07.465 iops : min= 2224, max= 4974, avg=3599.00, stdev=1944.54, samples=2 00:11:07.465 lat (msec) : 10=5.79%, 20=71.85%, 50=20.48%, 100=1.88% 00:11:07.465 cpu : usr=4.88%, sys=6.57%, ctx=476, majf=0, minf=1 00:11:07.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:07.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.465 issued rwts: total=3584,3719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.466 job1: (groupid=0, jobs=1): err= 0: pid=648480: Mon Nov 18 07:45:00 2024 00:11:07.466 read: IOPS=4478, BW=17.5MiB/s (18.3MB/s)(18.2MiB/1043msec) 00:11:07.466 slat (usec): min=2, max=17334, avg=101.73, stdev=697.89 00:11:07.466 clat (usec): min=4117, max=59931, avg=13838.35, stdev=6464.58 00:11:07.466 lat (usec): min=4121, max=59944, avg=13940.08, stdev=6507.59 00:11:07.466 clat percentiles (usec): 00:11:07.466 | 1.00th=[ 5604], 5.00th=[ 9241], 
10.00th=[10290], 20.00th=[10945], 00:11:07.466 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[12911], 00:11:07.466 | 70.00th=[13304], 80.00th=[14353], 90.00th=[18482], 95.00th=[22414], 00:11:07.466 | 99.00th=[59507], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:11:07.466 | 99.99th=[60031] 00:11:07.466 write: IOPS=4908, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1043msec); 0 zone resets 00:11:07.466 slat (usec): min=3, max=17675, avg=93.63, stdev=701.62 00:11:07.466 clat (usec): min=4282, max=78308, avg=13207.65, stdev=7650.88 00:11:07.466 lat (usec): min=4288, max=84285, avg=13301.28, stdev=7698.40 00:11:07.466 clat percentiles (usec): 00:11:07.466 | 1.00th=[ 6783], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11076], 00:11:07.466 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:11:07.466 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14746], 95.00th=[16712], 00:11:07.466 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:11:07.466 | 99.99th=[78119] 00:11:07.466 bw ( KiB/s): min=19584, max=20864, per=30.09%, avg=20224.00, stdev=905.10, samples=2 00:11:07.466 iops : min= 4896, max= 5216, avg=5056.00, stdev=226.27, samples=2 00:11:07.466 lat (msec) : 10=8.45%, 20=86.49%, 50=3.78%, 100=1.29% 00:11:07.466 cpu : usr=5.76%, sys=8.25%, ctx=252, majf=0, minf=1 00:11:07.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:07.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.466 issued rwts: total=4671,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.466 job2: (groupid=0, jobs=1): err= 0: pid=648481: Mon Nov 18 07:45:00 2024 00:11:07.466 read: IOPS=3794, BW=14.8MiB/s (15.5MB/s)(15.5MiB/1044msec) 00:11:07.466 slat (usec): min=2, max=20087, avg=132.36, stdev=898.02 00:11:07.466 clat (usec): min=5811, 
max=60289, avg=18195.95, stdev=9582.70 00:11:07.466 lat (usec): min=5819, max=80377, avg=18328.31, stdev=9647.00 00:11:07.466 clat percentiles (usec): 00:11:07.466 | 1.00th=[ 5932], 5.00th=[10552], 10.00th=[12518], 20.00th=[13173], 00:11:07.466 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14746], 60.00th=[16319], 00:11:07.466 | 70.00th=[18220], 80.00th=[19792], 90.00th=[26084], 95.00th=[39584], 00:11:07.466 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:11:07.466 | 99.99th=[60031] 00:11:07.466 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:11:07.466 slat (usec): min=3, max=10687, avg=107.94, stdev=597.58 00:11:07.466 clat (usec): min=4675, max=25128, avg=14717.99, stdev=2435.22 00:11:07.466 lat (usec): min=4681, max=25139, avg=14825.93, stdev=2472.76 00:11:07.466 clat percentiles (usec): 00:11:07.466 | 1.00th=[ 7373], 5.00th=[ 9896], 10.00th=[11731], 20.00th=[13304], 00:11:07.466 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15139], 60.00th=[15401], 00:11:07.466 | 70.00th=[15664], 80.00th=[16057], 90.00th=[17171], 95.00th=[18482], 00:11:07.466 | 99.00th=[20579], 99.50th=[21103], 99.90th=[22938], 99.95th=[23725], 00:11:07.466 | 99.99th=[25035] 00:11:07.466 bw ( KiB/s): min=16384, max=16384, per=24.38%, avg=16384.00, stdev= 0.00, samples=2 00:11:07.466 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:07.466 lat (msec) : 10=4.53%, 20=85.24%, 50=8.46%, 100=1.76% 00:11:07.466 cpu : usr=2.30%, sys=6.81%, ctx=415, majf=0, minf=1 00:11:07.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:07.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.466 issued rwts: total=3961,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.466 job3: (groupid=0, jobs=1): err= 0: pid=648482: Mon Nov 
18 07:45:00 2024 00:11:07.466 read: IOPS=4277, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1008msec) 00:11:07.466 slat (usec): min=2, max=13239, avg=109.99, stdev=822.85 00:11:07.466 clat (usec): min=5246, max=33986, avg=14893.37, stdev=4121.28 00:11:07.466 lat (usec): min=5254, max=33994, avg=15003.36, stdev=4176.15 00:11:07.466 clat percentiles (usec): 00:11:07.466 | 1.00th=[ 7308], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[12125], 00:11:07.466 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13829], 60.00th=[14222], 00:11:07.466 | 70.00th=[16581], 80.00th=[19006], 90.00th=[20317], 95.00th=[22414], 00:11:07.466 | 99.00th=[28443], 99.50th=[28443], 99.90th=[33817], 99.95th=[33817], 00:11:07.466 | 99.99th=[33817] 00:11:07.466 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:11:07.466 slat (usec): min=3, max=20158, avg=90.10, stdev=708.27 00:11:07.466 clat (usec): min=259, max=48611, avg=13826.51, stdev=6929.67 00:11:07.466 lat (usec): min=790, max=48623, avg=13916.61, stdev=6993.64 00:11:07.466 clat percentiles (usec): 00:11:07.466 | 1.00th=[ 2073], 5.00th=[ 5276], 10.00th=[ 6849], 20.00th=[ 9372], 00:11:07.466 | 30.00th=[11469], 40.00th=[12125], 50.00th=[13566], 60.00th=[14091], 00:11:07.466 | 70.00th=[14484], 80.00th=[15008], 90.00th=[24249], 95.00th=[25035], 00:11:07.466 | 99.00th=[43779], 99.50th=[46400], 99.90th=[47973], 99.95th=[48497], 00:11:07.466 | 99.99th=[48497] 00:11:07.466 bw ( KiB/s): min=17072, max=19792, per=27.42%, avg=18432.00, stdev=1923.33, samples=2 00:11:07.466 iops : min= 4268, max= 4948, avg=4608.00, stdev=480.83, samples=2 00:11:07.466 lat (usec) : 500=0.01%, 1000=0.10% 00:11:07.466 lat (msec) : 2=0.25%, 4=1.26%, 10=13.25%, 20=73.57%, 50=11.57% 00:11:07.466 cpu : usr=5.06%, sys=10.53%, ctx=409, majf=0, minf=1 00:11:07.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:07.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.466 issued rwts: total=4312,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.466 00:11:07.466 Run status group 0 (all jobs): 00:11:07.466 READ: bw=61.8MiB/s (64.8MB/s), 13.9MiB/s-17.5MiB/s (14.6MB/s-18.3MB/s), io=64.6MiB (67.7MB), run=1005-1044msec 00:11:07.466 WRITE: bw=65.6MiB/s (68.8MB/s), 14.5MiB/s-19.2MiB/s (15.2MB/s-20.1MB/s), io=68.5MiB (71.9MB), run=1005-1044msec 00:11:07.466 00:11:07.466 Disk stats (read/write): 00:11:07.466 nvme0n1: ios=2675/3072, merge=0/0, ticks=18706/33022, in_queue=51728, util=98.30% 00:11:07.466 nvme0n2: ios=4135/4505, merge=0/0, ticks=32088/32657, in_queue=64745, util=98.88% 00:11:07.466 nvme0n3: ios=3302/3584, merge=0/0, ticks=30756/29253, in_queue=60009, util=98.23% 00:11:07.466 nvme0n4: ios=3606/3871, merge=0/0, ticks=51948/52554, in_queue=104502, util=98.43% 00:11:07.466 07:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:07.466 [global] 00:11:07.466 thread=1 00:11:07.466 invalidate=1 00:11:07.466 rw=randwrite 00:11:07.466 time_based=1 00:11:07.466 runtime=1 00:11:07.466 ioengine=libaio 00:11:07.466 direct=1 00:11:07.466 bs=4096 00:11:07.466 iodepth=128 00:11:07.466 norandommap=0 00:11:07.466 numjobs=1 00:11:07.466 00:11:07.466 verify_dump=1 00:11:07.466 verify_backlog=512 00:11:07.466 verify_state_save=0 00:11:07.466 do_verify=1 00:11:07.466 verify=crc32c-intel 00:11:07.466 [job0] 00:11:07.466 filename=/dev/nvme0n1 00:11:07.466 [job1] 00:11:07.466 filename=/dev/nvme0n2 00:11:07.466 [job2] 00:11:07.466 filename=/dev/nvme0n3 00:11:07.466 [job3] 00:11:07.466 filename=/dev/nvme0n4 00:11:07.466 Could not set queue depth (nvme0n1) 00:11:07.466 Could not set queue depth (nvme0n2) 00:11:07.466 Could not set queue depth (nvme0n3) 00:11:07.466 Could not set queue depth (nvme0n4) 
00:11:07.725 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.725 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.725 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.725 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.725 fio-3.35 00:11:07.725 Starting 4 threads 00:11:09.099 00:11:09.099 job0: (groupid=0, jobs=1): err= 0: pid=648869: Mon Nov 18 07:45:01 2024 00:11:09.099 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:11:09.099 slat (usec): min=2, max=16288, avg=117.52, stdev=787.56 00:11:09.099 clat (usec): min=7566, max=38779, avg=15714.20, stdev=5004.48 00:11:09.099 lat (usec): min=7571, max=38786, avg=15831.73, stdev=5067.89 00:11:09.099 clat percentiles (usec): 00:11:09.099 | 1.00th=[ 7635], 5.00th=[10028], 10.00th=[10814], 20.00th=[12125], 00:11:09.099 | 30.00th=[13173], 40.00th=[13698], 50.00th=[14353], 60.00th=[15008], 00:11:09.099 | 70.00th=[16057], 80.00th=[20317], 90.00th=[22414], 95.00th=[26346], 00:11:09.099 | 99.00th=[32113], 99.50th=[36439], 99.90th=[38536], 99.95th=[38536], 00:11:09.099 | 99.99th=[38536] 00:11:09.099 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:11:09.099 slat (usec): min=3, max=10147, avg=144.32, stdev=757.37 00:11:09.099 clat (usec): min=2343, max=49292, avg=19766.17, stdev=9670.60 00:11:09.099 lat (usec): min=4488, max=49301, avg=19910.49, stdev=9743.70 00:11:09.099 clat percentiles (usec): 00:11:09.099 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[10814], 00:11:09.099 | 30.00th=[11469], 40.00th=[15270], 50.00th=[19006], 60.00th=[22152], 00:11:09.099 | 70.00th=[23725], 80.00th=[24511], 90.00th=[34341], 95.00th=[41157], 00:11:09.099 | 99.00th=[45876], 99.50th=[46924], 99.90th=[49546], 
99.95th=[49546], 00:11:09.099 | 99.99th=[49546] 00:11:09.099 bw ( KiB/s): min=13000, max=15672, per=25.20%, avg=14336.00, stdev=1889.39, samples=2 00:11:09.099 iops : min= 3250, max= 3918, avg=3584.00, stdev=472.35, samples=2 00:11:09.099 lat (msec) : 4=0.01%, 10=5.84%, 20=61.62%, 50=32.53% 00:11:09.099 cpu : usr=2.78%, sys=6.06%, ctx=335, majf=0, minf=1 00:11:09.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:09.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.099 issued rwts: total=3584,3591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.099 job1: (groupid=0, jobs=1): err= 0: pid=648873: Mon Nov 18 07:45:01 2024 00:11:09.099 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:11:09.099 slat (usec): min=2, max=14182, avg=114.84, stdev=787.83 00:11:09.099 clat (usec): min=7818, max=58958, avg=14824.99, stdev=8129.69 00:11:09.099 lat (usec): min=7827, max=58968, avg=14939.83, stdev=8210.78 00:11:09.099 clat percentiles (usec): 00:11:09.099 | 1.00th=[ 7898], 5.00th=[10552], 10.00th=[10945], 20.00th=[11207], 00:11:09.099 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12911], 00:11:09.099 | 70.00th=[13173], 80.00th=[13698], 90.00th=[25297], 95.00th=[34341], 00:11:09.099 | 99.00th=[49021], 99.50th=[49546], 99.90th=[52691], 99.95th=[55837], 00:11:09.099 | 99.99th=[58983] 00:11:09.099 write: IOPS=4106, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1010msec); 0 zone resets 00:11:09.099 slat (usec): min=2, max=24403, avg=119.21, stdev=882.90 00:11:09.099 clat (usec): min=5747, max=62980, avg=15706.31, stdev=8402.51 00:11:09.099 lat (usec): min=5758, max=63021, avg=15825.52, stdev=8461.09 00:11:09.099 clat percentiles (usec): 00:11:09.099 | 1.00th=[ 7898], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[11207], 00:11:09.099 | 30.00th=[11338], 
40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:11:09.099 | 70.00th=[15270], 80.00th=[20579], 90.00th=[27919], 95.00th=[33817], 00:11:09.099 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[54264], 00:11:09.099 | 99.99th=[63177] 00:11:09.099 bw ( KiB/s): min=12520, max=20248, per=28.80%, avg=16384.00, stdev=5464.52, samples=2 00:11:09.099 iops : min= 3130, max= 5062, avg=4096.00, stdev=1366.13, samples=2 00:11:09.099 lat (msec) : 10=5.60%, 20=76.81%, 50=17.47%, 100=0.12% 00:11:09.099 cpu : usr=2.78%, sys=4.36%, ctx=308, majf=0, minf=1 00:11:09.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:09.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.099 issued rwts: total=4096,4148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.099 job2: (groupid=0, jobs=1): err= 0: pid=648876: Mon Nov 18 07:45:01 2024 00:11:09.099 read: IOPS=2919, BW=11.4MiB/s (12.0MB/s)(12.0MiB/1048msec) 00:11:09.099 slat (usec): min=2, max=19958, avg=170.14, stdev=1124.31 00:11:09.099 clat (usec): min=9393, max=74984, avg=22061.02, stdev=13194.93 00:11:09.099 lat (usec): min=9401, max=88176, avg=22231.16, stdev=13282.18 00:11:09.099 clat percentiles (usec): 00:11:09.099 | 1.00th=[11600], 5.00th=[12780], 10.00th=[13566], 20.00th=[15795], 00:11:09.099 | 30.00th=[16319], 40.00th=[16712], 50.00th=[17433], 60.00th=[18482], 00:11:09.099 | 70.00th=[20579], 80.00th=[23200], 90.00th=[36439], 95.00th=[60031], 00:11:09.099 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:11:09.099 | 99.99th=[74974] 00:11:09.099 write: IOPS=2931, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1048msec); 0 zone resets 00:11:09.099 slat (usec): min=3, max=12098, avg=144.53, stdev=685.80 00:11:09.099 clat (usec): min=2657, max=49943, avg=21313.54, stdev=9650.46 00:11:09.099 lat 
(usec): min=2661, max=49969, avg=21458.07, stdev=9723.71 00:11:09.099 clat percentiles (usec): 00:11:09.099 | 1.00th=[ 5276], 5.00th=[10814], 10.00th=[11338], 20.00th=[12911], 00:11:09.099 | 30.00th=[14746], 40.00th=[16057], 50.00th=[19006], 60.00th=[22938], 00:11:09.099 | 70.00th=[23987], 80.00th=[27657], 90.00th=[36963], 95.00th=[42730], 00:11:09.099 | 99.00th=[45876], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:11:09.099 | 99.99th=[50070] 00:11:09.099 bw ( KiB/s): min=10248, max=14328, per=21.60%, avg=12288.00, stdev=2885.00, samples=2 00:11:09.099 iops : min= 2562, max= 3582, avg=3072.00, stdev=721.25, samples=2 00:11:09.099 lat (msec) : 4=0.26%, 10=0.49%, 20=58.64%, 50=37.26%, 100=3.34% 00:11:09.099 cpu : usr=2.48%, sys=4.58%, ctx=345, majf=0, minf=1 00:11:09.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:09.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.099 issued rwts: total=3060,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.099 job3: (groupid=0, jobs=1): err= 0: pid=648877: Mon Nov 18 07:45:01 2024 00:11:09.099 read: IOPS=3799, BW=14.8MiB/s (15.6MB/s)(15.0MiB/1010msec) 00:11:09.099 slat (usec): min=3, max=13502, avg=125.69, stdev=876.88 00:11:09.099 clat (usec): min=3267, max=39320, avg=15403.31, stdev=4832.36 00:11:09.099 lat (usec): min=5121, max=42098, avg=15528.99, stdev=4896.38 00:11:09.099 clat percentiles (usec): 00:11:09.099 | 1.00th=[ 8160], 5.00th=[10159], 10.00th=[10683], 20.00th=[12387], 00:11:09.099 | 30.00th=[12649], 40.00th=[13042], 50.00th=[14353], 60.00th=[15533], 00:11:09.099 | 70.00th=[16319], 80.00th=[18482], 90.00th=[21365], 95.00th=[23725], 00:11:09.099 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:11:09.099 | 99.99th=[39060] 00:11:09.099 write: IOPS=4055, 
BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:11:09.099 slat (usec): min=4, max=19628, avg=118.32, stdev=683.82 00:11:09.099 clat (usec): min=3647, max=55333, avg=16867.97, stdev=8466.05 00:11:09.099 lat (usec): min=3658, max=55338, avg=16986.28, stdev=8526.60 00:11:09.099 clat percentiles (usec): 00:11:09.099 | 1.00th=[ 5342], 5.00th=[ 8029], 10.00th=[10421], 20.00th=[12780], 00:11:09.099 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:11:09.099 | 70.00th=[15401], 80.00th=[21890], 90.00th=[27132], 95.00th=[34341], 00:11:09.099 | 99.00th=[48497], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:11:09.099 | 99.99th=[55313] 00:11:09.099 bw ( KiB/s): min=13456, max=19312, per=28.80%, avg=16384.00, stdev=4140.82, samples=2 00:11:09.099 iops : min= 3364, max= 4828, avg=4096.00, stdev=1035.20, samples=2 00:11:09.099 lat (msec) : 4=0.19%, 10=6.32%, 20=74.46%, 50=18.76%, 100=0.28% 00:11:09.099 cpu : usr=3.77%, sys=5.95%, ctx=467, majf=0, minf=1 00:11:09.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:09.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.099 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.099 00:11:09.099 Run status group 0 (all jobs): 00:11:09.099 READ: bw=54.3MiB/s (57.0MB/s), 11.4MiB/s-15.8MiB/s (12.0MB/s-16.6MB/s), io=56.9MiB (59.7MB), run=1007-1048msec 00:11:09.099 WRITE: bw=55.6MiB/s (58.3MB/s), 11.5MiB/s-16.0MiB/s (12.0MB/s-16.8MB/s), io=58.2MiB (61.1MB), run=1007-1048msec 00:11:09.099 00:11:09.099 Disk stats (read/write): 00:11:09.099 nvme0n1: ios=2695/3072, merge=0/0, ticks=21101/31415, in_queue=52516, util=98.00% 00:11:09.099 nvme0n2: ios=3632/3903, merge=0/0, ticks=16980/23955, in_queue=40935, util=90.56% 00:11:09.099 nvme0n3: ios=2590/2834, merge=0/0, 
ticks=27016/36417, in_queue=63433, util=98.43% 00:11:09.099 nvme0n4: ios=3111/3546, merge=0/0, ticks=44857/59352, in_queue=104209, util=89.58% 00:11:09.099 07:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:09.099 07:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=649080 00:11:09.100 07:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:09.100 07:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:09.100 [global] 00:11:09.100 thread=1 00:11:09.100 invalidate=1 00:11:09.100 rw=read 00:11:09.100 time_based=1 00:11:09.100 runtime=10 00:11:09.100 ioengine=libaio 00:11:09.100 direct=1 00:11:09.100 bs=4096 00:11:09.100 iodepth=1 00:11:09.100 norandommap=1 00:11:09.100 numjobs=1 00:11:09.100 00:11:09.100 [job0] 00:11:09.100 filename=/dev/nvme0n1 00:11:09.100 [job1] 00:11:09.100 filename=/dev/nvme0n2 00:11:09.100 [job2] 00:11:09.100 filename=/dev/nvme0n3 00:11:09.100 [job3] 00:11:09.100 filename=/dev/nvme0n4 00:11:09.100 Could not set queue depth (nvme0n1) 00:11:09.100 Could not set queue depth (nvme0n2) 00:11:09.100 Could not set queue depth (nvme0n3) 00:11:09.100 Could not set queue depth (nvme0n4) 00:11:09.100 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.100 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.100 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.100 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.100 fio-3.35 00:11:09.100 Starting 4 threads 00:11:12.382 07:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:12.382 07:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:12.382 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1863680, buflen=4096 00:11:12.382 fio: pid=649180, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:12.640 07:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.640 07:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:12.640 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=26853376, buflen=4096 00:11:12.640 fio: pid=649179, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:12.898 07:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.898 07:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:12.898 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=26951680, buflen=4096 00:11:12.898 fio: pid=649177, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:13.156 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=53731328, buflen=4096 00:11:13.156 fio: pid=649178, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:13.156 07:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.156 07:45:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:13.156 00:11:13.156 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649177: Mon Nov 18 07:45:06 2024 00:11:13.156 read: IOPS=1862, BW=7450KiB/s (7629kB/s)(25.7MiB/3533msec) 00:11:13.156 slat (usec): min=4, max=12928, avg=12.19, stdev=159.36 00:11:13.156 clat (usec): min=178, max=41985, avg=518.78, stdev=3329.44 00:11:13.156 lat (usec): min=184, max=53987, avg=530.97, stdev=3358.30 00:11:13.156 clat percentiles (usec): 00:11:13.156 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:11:13.156 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:11:13.156 | 70.00th=[ 247], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 375], 00:11:13.156 | 99.00th=[ 529], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:11:13.156 | 99.99th=[42206] 00:11:13.156 bw ( KiB/s): min= 96, max=16904, per=28.19%, avg=7906.67, stdev=8397.79, samples=6 00:11:13.156 iops : min= 24, max= 4226, avg=1976.67, stdev=2099.45, samples=6 00:11:13.156 lat (usec) : 250=72.45%, 500=26.20%, 750=0.64% 00:11:13.156 lat (msec) : 2=0.03%, 50=0.67% 00:11:13.156 cpu : usr=0.85%, sys=2.15%, ctx=6584, majf=0, minf=1 00:11:13.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.156 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.156 issued rwts: total=6581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.156 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649178: Mon Nov 18 07:45:06 2024 00:11:13.156 read: IOPS=3443, BW=13.4MiB/s (14.1MB/s)(51.2MiB/3810msec) 00:11:13.156 slat (usec): min=4, 
max=27316, avg=15.94, stdev=337.87 00:11:13.156 clat (usec): min=162, max=41112, avg=270.52, stdev=1290.13 00:11:13.156 lat (usec): min=167, max=41130, avg=286.46, stdev=1334.25 00:11:13.156 clat percentiles (usec): 00:11:13.156 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 200], 00:11:13.156 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:11:13.156 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 289], 00:11:13.156 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[12387], 99.95th=[41157], 00:11:13.156 | 99.99th=[41157] 00:11:13.156 bw ( KiB/s): min= 176, max=16584, per=48.22%, avg=13520.00, stdev=5925.56, samples=7 00:11:13.156 iops : min= 44, max= 4146, avg=3380.00, stdev=1481.39, samples=7 00:11:13.156 lat (usec) : 250=85.14%, 500=14.38%, 750=0.34% 00:11:13.156 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02%, 50=0.10% 00:11:13.156 cpu : usr=1.39%, sys=3.78%, ctx=13125, majf=0, minf=2 00:11:13.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.156 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.156 issued rwts: total=13119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.156 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649179: Mon Nov 18 07:45:06 2024 00:11:13.156 read: IOPS=2021, BW=8084KiB/s (8278kB/s)(25.6MiB/3244msec) 00:11:13.156 slat (nsec): min=4580, max=78047, avg=13433.57, stdev=7764.49 00:11:13.156 clat (usec): min=195, max=42112, avg=474.94, stdev=2852.83 00:11:13.156 lat (usec): min=203, max=42124, avg=488.38, stdev=2853.96 00:11:13.156 clat percentiles (usec): 00:11:13.156 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:11:13.156 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 258], 00:11:13.156 | 
70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 400], 95.00th=[ 469], 00:11:13.156 | 99.00th=[ 570], 99.50th=[14746], 99.90th=[41157], 99.95th=[42206], 00:11:13.156 | 99.99th=[42206] 00:11:13.156 bw ( KiB/s): min= 96, max=15320, per=27.16%, avg=7617.33, stdev=6931.64, samples=6 00:11:13.156 iops : min= 24, max= 3830, avg=1904.33, stdev=1732.91, samples=6 00:11:13.156 lat (usec) : 250=53.27%, 500=44.30%, 750=1.85%, 1000=0.03% 00:11:13.156 lat (msec) : 2=0.03%, 20=0.02%, 50=0.49% 00:11:13.156 cpu : usr=1.20%, sys=3.15%, ctx=6558, majf=0, minf=2 00:11:13.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.156 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.156 issued rwts: total=6557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.156 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649180: Mon Nov 18 07:45:06 2024 00:11:13.156 read: IOPS=153, BW=612KiB/s (626kB/s)(1820KiB/2976msec) 00:11:13.156 slat (nsec): min=6079, max=51073, avg=12910.11, stdev=8417.59 00:11:13.156 clat (usec): min=208, max=41989, avg=6473.37, stdev=14680.36 00:11:13.156 lat (usec): min=215, max=42011, avg=6486.27, stdev=14685.94 00:11:13.156 clat percentiles (usec): 00:11:13.156 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:11:13.156 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 277], 00:11:13.156 | 70.00th=[ 297], 80.00th=[ 347], 90.00th=[41157], 95.00th=[41157], 00:11:13.156 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:13.156 | 99.99th=[42206] 00:11:13.156 bw ( KiB/s): min= 96, max= 3128, per=2.51%, avg=705.60, stdev=1354.17, samples=5 00:11:13.156 iops : min= 24, max= 782, avg=176.40, stdev=338.54, samples=5 00:11:13.156 lat (usec) : 250=42.76%, 500=41.23%, 
750=0.22% 00:11:13.156 lat (msec) : 2=0.22%, 4=0.22%, 50=15.13% 00:11:13.156 cpu : usr=0.03%, sys=0.27%, ctx=457, majf=0, minf=2 00:11:13.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.156 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.156 issued rwts: total=456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.156 00:11:13.157 Run status group 0 (all jobs): 00:11:13.157 READ: bw=27.4MiB/s (28.7MB/s), 612KiB/s-13.4MiB/s (626kB/s-14.1MB/s), io=104MiB (109MB), run=2976-3810msec 00:11:13.157 00:11:13.157 Disk stats (read/write): 00:11:13.157 nvme0n1: ios=6576/0, merge=0/0, ticks=3230/0, in_queue=3230, util=95.51% 00:11:13.157 nvme0n2: ios=12218/0, merge=0/0, ticks=3280/0, in_queue=3280, util=94.64% 00:11:13.157 nvme0n3: ios=6121/0, merge=0/0, ticks=3492/0, in_queue=3492, util=99.78% 00:11:13.157 nvme0n4: ios=504/0, merge=0/0, ticks=3594/0, in_queue=3594, util=99.59% 00:11:13.415 07:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.415 07:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:13.673 07:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.673 07:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:13.931 07:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.931 07:45:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:14.497 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.497 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 649080 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:14.756 nvmf hotplug test: fio failed as expected 00:11:14.756 07:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.014 rmmod nvme_tcp 00:11:15.014 rmmod nvme_fabrics 00:11:15.014 rmmod nvme_keyring 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 646926 ']' 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 646926 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 646926 ']' 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 646926 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.014 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 646926 00:11:15.272 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.272 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.272 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 646926' 00:11:15.272 killing process with pid 646926 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 646926 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 646926 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:15.273 07:45:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.273 07:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.807 00:11:17.807 real 0m24.480s 00:11:17.807 user 1m24.953s 00:11:17.807 sys 0m7.770s 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.807 ************************************ 00:11:17.807 END TEST nvmf_fio_target 00:11:17.807 ************************************ 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:17.807 ************************************ 00:11:17.807 START TEST nvmf_bdevio 00:11:17.807 ************************************ 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:17.807 * Looking for test storage... 00:11:17.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.807 07:45:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.807 --rc genhtml_branch_coverage=1 00:11:17.807 --rc genhtml_function_coverage=1 00:11:17.807 --rc genhtml_legend=1 00:11:17.807 --rc geninfo_all_blocks=1 00:11:17.807 --rc geninfo_unexecuted_blocks=1 00:11:17.807 00:11:17.807 ' 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.807 --rc genhtml_branch_coverage=1 00:11:17.807 --rc genhtml_function_coverage=1 00:11:17.807 --rc genhtml_legend=1 00:11:17.807 --rc geninfo_all_blocks=1 00:11:17.807 --rc geninfo_unexecuted_blocks=1 00:11:17.807 00:11:17.807 ' 00:11:17.807 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.808 --rc genhtml_branch_coverage=1 00:11:17.808 --rc genhtml_function_coverage=1 00:11:17.808 --rc genhtml_legend=1 00:11:17.808 --rc geninfo_all_blocks=1 00:11:17.808 --rc geninfo_unexecuted_blocks=1 00:11:17.808 00:11:17.808 ' 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.808 --rc genhtml_branch_coverage=1 00:11:17.808 --rc genhtml_function_coverage=1 00:11:17.808 --rc genhtml_legend=1 00:11:17.808 --rc geninfo_all_blocks=1 00:11:17.808 --rc geninfo_unexecuted_blocks=1 00:11:17.808 00:11:17.808 ' 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.808 07:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.714 07:45:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.714 07:45:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.714 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:19.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:19.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.715 
07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:19.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:19.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.715 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:19.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms
00:11:19.974
00:11:19.974 --- 10.0.0.2 ping statistics ---
00:11:19.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:19.974 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:19.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:19.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms
00:11:19.974
00:11:19.974 --- 10.0.0.1 ping statistics ---
00:11:19.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:19.974 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:19.974 07:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:19.974 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:11:19.974 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:19.974 07:45:13
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.974 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=652393 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 652393 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 652393 ']' 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.975 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.975 [2024-11-18 07:45:13.060341] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:11:19.975 [2024-11-18 07:45:13.060416] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.233 [2024-11-18 07:45:13.138890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.233 [2024-11-18 07:45:13.189388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.233 [2024-11-18 07:45:13.189447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.233 [2024-11-18 07:45:13.189460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.233 [2024-11-18 07:45:13.189486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.233 [2024-11-18 07:45:13.189522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:20.233 [2024-11-18 07:45:13.191104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:20.233 [2024-11-18 07:45:13.191165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:20.233 [2024-11-18 07:45:13.191230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:20.233 [2024-11-18 07:45:13.191233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.233 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.233 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:20.233 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.233 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.233 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.491 [2024-11-18 07:45:13.334647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.491 07:45:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.491 Malloc0 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.491 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.492 [2024-11-18 07:45:13.399664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:20.492 { 00:11:20.492 "params": { 00:11:20.492 "name": "Nvme$subsystem", 00:11:20.492 "trtype": "$TEST_TRANSPORT", 00:11:20.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:20.492 "adrfam": "ipv4", 00:11:20.492 "trsvcid": "$NVMF_PORT", 00:11:20.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:20.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:20.492 "hdgst": ${hdgst:-false}, 00:11:20.492 "ddgst": ${ddgst:-false} 00:11:20.492 }, 00:11:20.492 "method": "bdev_nvme_attach_controller" 00:11:20.492 } 00:11:20.492 EOF 00:11:20.492 )") 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:11:20.492 07:45:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:20.492 "params": {
00:11:20.492 "name": "Nvme1",
00:11:20.492 "trtype": "tcp",
00:11:20.492 "traddr": "10.0.0.2",
00:11:20.492 "adrfam": "ipv4",
00:11:20.492 "trsvcid": "4420",
00:11:20.492 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:11:20.492 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:11:20.492 "hdgst": false,
00:11:20.492 "ddgst": false
00:11:20.492 },
00:11:20.492 "method": "bdev_nvme_attach_controller"
00:11:20.492 }'
00:11:20.492 [2024-11-18 07:45:13.452120] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:11:20.492 [2024-11-18 07:45:13.452209] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid652469 ]
00:11:20.492 [2024-11-18 07:45:13.524482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:20.492 [2024-11-18 07:45:13.576111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:20.492 [2024-11-18 07:45:13.576161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:20.492 [2024-11-18 07:45:13.576165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:21.057 I/O targets:
00:11:21.057 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:11:21.057
00:11:21.057
00:11:21.057 CUnit - A unit testing framework for C - Version 2.1-3
00:11:21.057 http://cunit.sourceforge.net/
00:11:21.057
00:11:21.057
00:11:21.057 Suite: bdevio tests on: Nvme1n1
00:11:21.057 Test: blockdev write read block ...passed
00:11:21.057 Test: blockdev write zeroes read block ...passed
00:11:21.057 Test: blockdev write zeroes read no split ...passed
00:11:21.057 Test: blockdev write zeroes read split ...passed
00:11:21.057 Test: blockdev write zeroes read split partial ...passed
00:11:21.057 Test: blockdev reset ...[2024-11-18 07:45:14.034960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:11:21.057 [2024-11-18 07:45:14.035079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b9ac0 (9): Bad file descriptor
00:11:21.057 [2024-11-18 07:45:14.051624] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:11:21.057 passed
00:11:21.057 Test: blockdev write read 8 blocks ...passed
00:11:21.057 Test: blockdev write read size > 128k ...passed
00:11:21.057 Test: blockdev write read invalid size ...passed
00:11:21.057 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:21.057 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:21.057 Test: blockdev write read max offset ...passed
00:11:21.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:21.315 Test: blockdev writev readv 8 blocks ...passed
00:11:21.315 Test: blockdev writev readv 30 x 1block ...passed
00:11:21.315 Test: blockdev writev readv block ...passed
00:11:21.315 Test: blockdev writev readv size > 128k ...passed
00:11:21.315 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:21.315 Test: blockdev comparev and writev ...[2024-11-18 07:45:14.263640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:21.315 [2024-11-18 07:45:14.263680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:11:21.315 [2024-11-18 07:45:14.263707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:21.315 [2024-11-18
07:45:14.263725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:21.315 [2024-11-18 07:45:14.264057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:21.315 [2024-11-18 07:45:14.264083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:21.315 [2024-11-18 07:45:14.264107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:21.315 [2024-11-18 07:45:14.264124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:21.315 [2024-11-18 07:45:14.264433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:21.315 [2024-11-18 07:45:14.264458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:21.315 [2024-11-18 07:45:14.264481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:21.315 [2024-11-18 07:45:14.264508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:21.315 [2024-11-18 07:45:14.264840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:21.315 [2024-11-18 07:45:14.264864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:21.315 [2024-11-18 07:45:14.264886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200
00:11:21.315 [2024-11-18 07:45:14.264903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:21.315 passed
00:11:21.315 Test: blockdev nvme passthru rw ...passed
00:11:21.315 Test: blockdev nvme passthru vendor specific ...[2024-11-18 07:45:14.347732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:21.315 [2024-11-18 07:45:14.347761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:21.315 [2024-11-18 07:45:14.347913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:21.315 [2024-11-18 07:45:14.347936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:21.315 [2024-11-18 07:45:14.348085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:21.315 [2024-11-18 07:45:14.348109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:21.315 [2024-11-18 07:45:14.348262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:21.315 [2024-11-18 07:45:14.348287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:21.315 passed
00:11:21.315 Test: blockdev nvme admin passthru ...passed
00:11:21.573 Test: blockdev copy ...passed
00:11:21.573
00:11:21.573 Run Summary: Type Total Ran Passed Failed Inactive
00:11:21.573 suites 1 1 n/a 0 0
00:11:21.573 tests 23 23 23 0 0
00:11:21.573 asserts 152 152 152 0 n/a
00:11:21.573
00:11:21.573 Elapsed time = 0.971 seconds
00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.573 rmmod nvme_tcp 00:11:21.573 rmmod nvme_fabrics 00:11:21.573 rmmod nvme_keyring 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 652393 ']' 00:11:21.573 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 652393 00:11:21.574 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 652393 ']' 00:11:21.574 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 652393 00:11:21.574 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:21.574 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.574 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 652393 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 652393' 00:11:21.833 killing process with pid 652393 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 652393 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 652393 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:21.833 07:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:24.371 07:45:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:24.371
00:11:24.371 real 0m6.546s
00:11:24.371 user 0m10.104s
00:11:24.371 sys 0m2.240s
00:11:24.371 07:45:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.371 07:45:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:24.371 ************************************
00:11:24.371 END TEST nvmf_bdevio
00:11:24.371 ************************************
00:11:24.371 07:45:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:11:24.371
00:11:24.371 real 3m56.344s
00:11:24.371 user 10m15.305s
00:11:24.371 sys 1m8.204s
00:11:24.371 07:45:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.371 07:45:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:24.371 ************************************
00:11:24.371 END TEST nvmf_target_core
00:11:24.371 ************************************
00:11:24.371 07:45:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:11:24.371 07:45:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:24.371 07:45:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:24.371 07:45:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:24.371 ************************************ 00:11:24.371 START TEST nvmf_target_extra 00:11:24.371 ************************************ 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:24.371 * Looking for test storage... 00:11:24.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.371 --rc genhtml_branch_coverage=1 00:11:24.371 --rc genhtml_function_coverage=1 00:11:24.371 --rc genhtml_legend=1 00:11:24.371 --rc geninfo_all_blocks=1 
00:11:24.371 --rc geninfo_unexecuted_blocks=1 00:11:24.371 00:11:24.371 ' 00:11:24.371 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.371 --rc genhtml_branch_coverage=1 00:11:24.371 --rc genhtml_function_coverage=1 00:11:24.371 --rc genhtml_legend=1 00:11:24.371 --rc geninfo_all_blocks=1 00:11:24.371 --rc geninfo_unexecuted_blocks=1 00:11:24.371 00:11:24.372 ' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.372 --rc genhtml_branch_coverage=1 00:11:24.372 --rc genhtml_function_coverage=1 00:11:24.372 --rc genhtml_legend=1 00:11:24.372 --rc geninfo_all_blocks=1 00:11:24.372 --rc geninfo_unexecuted_blocks=1 00:11:24.372 00:11:24.372 ' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.372 --rc genhtml_branch_coverage=1 00:11:24.372 --rc genhtml_function_coverage=1 00:11:24.372 --rc genhtml_legend=1 00:11:24.372 --rc geninfo_all_blocks=1 00:11:24.372 --rc geninfo_unexecuted_blocks=1 00:11:24.372 00:11:24.372 ' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.372 ************************************ 00:11:24.372 START TEST nvmf_example 00:11:24.372 ************************************ 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:24.372 * Looking for test storage... 00:11:24.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.372 
07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
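The `scripts/common.sh` trace above walks `cmp_versions` step by step: set `IFS=.-` to split both version strings into arrays (`read -ra ver1`, `read -ra ver2`), then compare component pairs numerically until one differs. A condensed sketch of the same algorithm; it is simplified (the real `cmp_versions` also supports `>`, `=`, and related operators):

```shell
#!/usr/bin/env bash
# version_lt A B: succeed (return 0) when version A sorts strictly
# before version B, comparing dot/dash-separated components numerically.
# Simplified sketch of the cmp_versions logic traced in scripts/common.sh;
# assumes purely numeric components.
version_lt() {
    local IFS=.-                     # split on '.' and '-' as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}   # pad missing components with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                         # equal -> not strictly less
}
```

With this, `version_lt 1.15 2` succeeds, matching the `lt 1.15 2` call in the trace that gates the lcov coverage options.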
00:11:24.372 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.373 --rc genhtml_branch_coverage=1 00:11:24.373 --rc genhtml_function_coverage=1 00:11:24.373 --rc genhtml_legend=1 00:11:24.373 --rc geninfo_all_blocks=1 00:11:24.373 --rc geninfo_unexecuted_blocks=1 00:11:24.373 00:11:24.373 ' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.373 --rc genhtml_branch_coverage=1 00:11:24.373 --rc genhtml_function_coverage=1 00:11:24.373 --rc genhtml_legend=1 00:11:24.373 --rc geninfo_all_blocks=1 00:11:24.373 --rc geninfo_unexecuted_blocks=1 00:11:24.373 00:11:24.373 ' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.373 --rc genhtml_branch_coverage=1 00:11:24.373 --rc genhtml_function_coverage=1 00:11:24.373 --rc genhtml_legend=1 00:11:24.373 --rc geninfo_all_blocks=1 00:11:24.373 --rc geninfo_unexecuted_blocks=1 00:11:24.373 00:11:24.373 ' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.373 --rc 
genhtml_branch_coverage=1 00:11:24.373 --rc genhtml_function_coverage=1 00:11:24.373 --rc genhtml_legend=1 00:11:24.373 --rc geninfo_all_blocks=1 00:11:24.373 --rc geninfo_unexecuted_blocks=1 00:11:24.373 00:11:24.373 ' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:24.373 07:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:24.373 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.374 
07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.374 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.902 07:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:26.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:26.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:26.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.902 07:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:26.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.902 
07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:11:26.902 00:11:26.902 --- 10.0.0.2 ping statistics --- 00:11:26.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.902 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:11:26.902 00:11:26.902 --- 10.0.0.1 ping statistics --- 00:11:26.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.902 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.902 07:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=654610 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 654610 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 654610 ']' 00:11:26.902 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:26.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.903 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.160 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.160 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.160 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.160 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.160 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:27.160 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.160 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.160 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.160 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:27.161 07:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:27.161 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:39.357 Initializing NVMe Controllers 00:11:39.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:39.357 Initialization complete. Launching workers. 00:11:39.357 ======================================================== 00:11:39.357 Latency(us) 00:11:39.357 Device Information : IOPS MiB/s Average min max 00:11:39.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15111.20 59.03 4234.82 677.55 15338.42 00:11:39.357 ======================================================== 00:11:39.357 Total : 15111.20 59.03 4234.82 677.55 15338.42 00:11:39.357 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.357 rmmod nvme_tcp 00:11:39.357 rmmod nvme_fabrics 00:11:39.357 rmmod nvme_keyring 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 654610 ']' 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 654610 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 654610 ']' 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 654610 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 654610 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 654610' 00:11:39.357 killing process with pid 654610 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 654610 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 654610 00:11:39.357 nvmf threads initialize successfully 00:11:39.357 bdev subsystem init successfully 00:11:39.357 created a nvmf target service 00:11:39.357 create targets's poll groups done 00:11:39.357 all subsystems of target started 00:11:39.357 nvmf target is running 00:11:39.357 all subsystems of target stopped 00:11:39.357 destroy targets's poll groups done 00:11:39.357 destroyed the nvmf target service 00:11:39.357 bdev subsystem finish 
successfully 00:11:39.357 nvmf threads destroy successfully 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.357 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.617 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.617 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:39.617 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.617 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.878 00:11:39.878 real 0m15.466s 00:11:39.878 user 0m42.415s 00:11:39.878 sys 0m3.427s 00:11:39.878 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.878 ************************************ 00:11:39.878 END TEST nvmf_example 00:11:39.878 ************************************ 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.878 ************************************ 00:11:39.878 START TEST nvmf_filesystem 00:11:39.878 ************************************ 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:39.878 * Looking for test storage... 
00:11:39.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:39.878 
07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.878 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:39.878 --rc genhtml_branch_coverage=1 00:11:39.878 --rc genhtml_function_coverage=1 00:11:39.878 --rc genhtml_legend=1 00:11:39.878 --rc geninfo_all_blocks=1 00:11:39.878 --rc geninfo_unexecuted_blocks=1 00:11:39.878 00:11:39.878 ' 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.878 --rc genhtml_branch_coverage=1 00:11:39.878 --rc genhtml_function_coverage=1 00:11:39.878 --rc genhtml_legend=1 00:11:39.878 --rc geninfo_all_blocks=1 00:11:39.878 --rc geninfo_unexecuted_blocks=1 00:11:39.878 00:11:39.878 ' 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.878 --rc genhtml_branch_coverage=1 00:11:39.878 --rc genhtml_function_coverage=1 00:11:39.878 --rc genhtml_legend=1 00:11:39.878 --rc geninfo_all_blocks=1 00:11:39.878 --rc geninfo_unexecuted_blocks=1 00:11:39.878 00:11:39.878 ' 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.878 --rc genhtml_branch_coverage=1 00:11:39.878 --rc genhtml_function_coverage=1 00:11:39.878 --rc genhtml_legend=1 00:11:39.878 --rc geninfo_all_blocks=1 00:11:39.878 --rc geninfo_unexecuted_blocks=1 00:11:39.878 00:11:39.878 ' 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:39.878 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:39.878 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:39.879 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:39.879 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:39.879 
07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:39.879 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:39.880 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:39.880 #define SPDK_CONFIG_H 00:11:39.880 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:39.880 #define SPDK_CONFIG_APPS 1 00:11:39.880 #define SPDK_CONFIG_ARCH native 00:11:39.880 #undef SPDK_CONFIG_ASAN 00:11:39.880 #undef SPDK_CONFIG_AVAHI 00:11:39.880 #undef SPDK_CONFIG_CET 00:11:39.880 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:39.880 #define SPDK_CONFIG_COVERAGE 1 00:11:39.880 #define SPDK_CONFIG_CROSS_PREFIX 00:11:39.880 #undef SPDK_CONFIG_CRYPTO 00:11:39.880 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:39.880 #undef SPDK_CONFIG_CUSTOMOCF 00:11:39.880 #undef SPDK_CONFIG_DAOS 00:11:39.880 #define SPDK_CONFIG_DAOS_DIR 00:11:39.880 #define SPDK_CONFIG_DEBUG 1 00:11:39.880 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:39.880 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:39.880 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:39.880 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:39.880 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:39.880 #undef SPDK_CONFIG_DPDK_UADK 00:11:39.880 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:39.880 #define SPDK_CONFIG_EXAMPLES 1 00:11:39.880 #undef SPDK_CONFIG_FC 00:11:39.880 #define SPDK_CONFIG_FC_PATH 00:11:39.880 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:39.880 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:39.880 #define SPDK_CONFIG_FSDEV 1 00:11:39.880 #undef SPDK_CONFIG_FUSE 00:11:39.880 #undef SPDK_CONFIG_FUZZER 00:11:39.880 #define 
SPDK_CONFIG_FUZZER_LIB 00:11:39.880 #undef SPDK_CONFIG_GOLANG 00:11:39.880 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:39.880 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:39.880 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:39.880 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:39.880 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:39.880 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:39.880 #undef SPDK_CONFIG_HAVE_LZ4 00:11:39.880 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:39.880 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:39.880 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:39.880 #define SPDK_CONFIG_IDXD 1 00:11:39.880 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:39.880 #undef SPDK_CONFIG_IPSEC_MB 00:11:39.880 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:39.880 #define SPDK_CONFIG_ISAL 1 00:11:39.880 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:39.880 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:39.880 #define SPDK_CONFIG_LIBDIR 00:11:39.880 #undef SPDK_CONFIG_LTO 00:11:39.880 #define SPDK_CONFIG_MAX_LCORES 128 00:11:39.880 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:39.880 #define SPDK_CONFIG_NVME_CUSE 1 00:11:39.880 #undef SPDK_CONFIG_OCF 00:11:39.880 #define SPDK_CONFIG_OCF_PATH 00:11:39.880 #define SPDK_CONFIG_OPENSSL_PATH 00:11:39.880 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:39.880 #define SPDK_CONFIG_PGO_DIR 00:11:39.880 #undef SPDK_CONFIG_PGO_USE 00:11:39.880 #define SPDK_CONFIG_PREFIX /usr/local 00:11:39.880 #undef SPDK_CONFIG_RAID5F 00:11:39.880 #undef SPDK_CONFIG_RBD 00:11:39.880 #define SPDK_CONFIG_RDMA 1 00:11:39.880 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:39.880 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:39.880 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:39.880 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:39.880 #define SPDK_CONFIG_SHARED 1 00:11:39.880 #undef SPDK_CONFIG_SMA 00:11:39.880 #define SPDK_CONFIG_TESTS 1 00:11:39.880 #undef SPDK_CONFIG_TSAN 00:11:39.880 #define SPDK_CONFIG_UBLK 1 00:11:39.880 #define SPDK_CONFIG_UBSAN 1 00:11:39.880 #undef 
SPDK_CONFIG_UNIT_TESTS 00:11:39.880 #undef SPDK_CONFIG_URING 00:11:39.880 #define SPDK_CONFIG_URING_PATH 00:11:39.880 #undef SPDK_CONFIG_URING_ZNS 00:11:39.880 #undef SPDK_CONFIG_USDT 00:11:39.880 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:39.880 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:39.880 #define SPDK_CONFIG_VFIO_USER 1 00:11:39.880 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:39.880 #define SPDK_CONFIG_VHOST 1 00:11:39.880 #define SPDK_CONFIG_VIRTIO 1 00:11:39.880 #undef SPDK_CONFIG_VTUNE 00:11:39.880 #define SPDK_CONFIG_VTUNE_DIR 00:11:39.880 #define SPDK_CONFIG_WERROR 1 00:11:39.880 #define SPDK_CONFIG_WPDK_DIR 00:11:39.880 #undef SPDK_CONFIG_XNVME 00:11:39.880 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.880 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.881 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:39.881 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:40.142 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:40.142 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:40.143 
07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:40.143 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:40.143 
07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:40.143 07:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.143 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 656297 ]] 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 656297 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:40.144 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.vIKIrw 00:11:40.144 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:40.144 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:40.144 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vIKIrw/tests/target /tmp/spdk.vIKIrw 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=53503328256 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988532224 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8485203968 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.145 
07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984232960 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993928192 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994268160 00:11:40.145 07:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=339968 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:40.145 * Looking for test storage... 
00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=53503328256 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=10699796480 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.145 07:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:40.145 07:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.145 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:40.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.146 --rc genhtml_branch_coverage=1 00:11:40.146 --rc genhtml_function_coverage=1 00:11:40.146 --rc genhtml_legend=1 00:11:40.146 --rc geninfo_all_blocks=1 00:11:40.146 --rc geninfo_unexecuted_blocks=1 00:11:40.146 00:11:40.146 ' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:40.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.146 --rc genhtml_branch_coverage=1 00:11:40.146 --rc genhtml_function_coverage=1 00:11:40.146 --rc genhtml_legend=1 00:11:40.146 --rc geninfo_all_blocks=1 00:11:40.146 --rc geninfo_unexecuted_blocks=1 00:11:40.146 00:11:40.146 ' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:40.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.146 --rc genhtml_branch_coverage=1 00:11:40.146 --rc genhtml_function_coverage=1 00:11:40.146 --rc genhtml_legend=1 00:11:40.146 --rc geninfo_all_blocks=1 00:11:40.146 --rc geninfo_unexecuted_blocks=1 00:11:40.146 00:11:40.146 ' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:40.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.146 --rc genhtml_branch_coverage=1 00:11:40.146 --rc genhtml_function_coverage=1 00:11:40.146 --rc genhtml_legend=1 00:11:40.146 --rc geninfo_all_blocks=1 00:11:40.146 --rc geninfo_unexecuted_blocks=1 00:11:40.146 00:11:40.146 ' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.146 07:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.146 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.147 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.147 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.147 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.147 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.147 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.147 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.147 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.147 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.680 07:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:42.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:42.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.680 07:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:42.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:42.680 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:42.680 07:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.680 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:42.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:11:42.681 00:11:42.681 --- 10.0.0.2 ping statistics --- 00:11:42.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.681 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:11:42.681 00:11:42.681 --- 10.0.0.1 ping statistics --- 00:11:42.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.681 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:42.681 07:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.681 ************************************ 00:11:42.681 START TEST nvmf_filesystem_no_in_capsule 00:11:42.681 ************************************ 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=657947 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 657947 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 657947 ']' 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.681 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.681 [2024-11-18 07:45:35.556040] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:11:42.681 [2024-11-18 07:45:35.556144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.681 [2024-11-18 07:45:35.628210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.681 [2024-11-18 07:45:35.672700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.681 [2024-11-18 07:45:35.672755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:42.681 [2024-11-18 07:45:35.672778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.681 [2024-11-18 07:45:35.672788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.681 [2024-11-18 07:45:35.672798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.681 [2024-11-18 07:45:35.674218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.681 [2024-11-18 07:45:35.674326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.681 [2024-11-18 07:45:35.674400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.681 [2024-11-18 07:45:35.674403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.939 [2024-11-18 07:45:35.816102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.939 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.940 Malloc1 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.940 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.940 [2024-11-18 07:45:36.006339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:42.940 07:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.940 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:42.940 { 00:11:42.940 "name": "Malloc1", 00:11:42.940 "aliases": [ 00:11:42.940 "4c5108b3-2dd6-46a6-ad8e-3b5d4644bc8a" 00:11:42.940 ], 00:11:42.940 "product_name": "Malloc disk", 00:11:42.940 "block_size": 512, 00:11:42.940 "num_blocks": 1048576, 00:11:42.940 "uuid": "4c5108b3-2dd6-46a6-ad8e-3b5d4644bc8a", 00:11:42.940 "assigned_rate_limits": { 00:11:42.940 "rw_ios_per_sec": 0, 00:11:42.940 "rw_mbytes_per_sec": 0, 00:11:42.940 "r_mbytes_per_sec": 0, 00:11:42.940 "w_mbytes_per_sec": 0 00:11:42.940 }, 00:11:42.940 "claimed": true, 00:11:42.940 "claim_type": "exclusive_write", 00:11:42.940 "zoned": false, 00:11:42.940 "supported_io_types": { 00:11:42.940 "read": true, 00:11:42.940 "write": true, 00:11:42.940 "unmap": true, 00:11:42.940 "flush": true, 00:11:42.940 "reset": true, 00:11:42.940 "nvme_admin": false, 00:11:42.940 "nvme_io": false, 00:11:42.940 "nvme_io_md": false, 00:11:42.940 "write_zeroes": true, 00:11:42.940 "zcopy": true, 00:11:42.940 "get_zone_info": false, 00:11:42.940 "zone_management": false, 00:11:42.940 "zone_append": false, 00:11:42.940 "compare": false, 00:11:42.940 "compare_and_write": 
false, 00:11:42.940 "abort": true, 00:11:42.940 "seek_hole": false, 00:11:42.940 "seek_data": false, 00:11:42.940 "copy": true, 00:11:42.940 "nvme_iov_md": false 00:11:42.940 }, 00:11:42.940 "memory_domains": [ 00:11:42.940 { 00:11:42.940 "dma_device_id": "system", 00:11:42.940 "dma_device_type": 1 00:11:42.940 }, 00:11:42.940 { 00:11:42.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.940 "dma_device_type": 2 00:11:42.940 } 00:11:42.940 ], 00:11:42.940 "driver_specific": {} 00:11:42.940 } 00:11:42.940 ]' 00:11:43.228 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:43.228 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:43.228 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:43.228 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:43.228 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:43.228 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:43.228 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:43.228 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.812 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:43.812 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:43.812 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.812 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:43.812 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:46.337 07:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:46.337 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:46.337 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:46.902 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:47.835 07:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.835 ************************************ 00:11:47.835 START TEST filesystem_ext4 00:11:47.835 ************************************ 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:47.835 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:47.836 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:47.836 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:47.836 07:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:47.836 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:47.836 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:47.836 mke2fs 1.47.0 (5-Feb-2023) 00:11:47.836 Discarding device blocks: 0/522240 done 00:11:47.836 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:47.836 Filesystem UUID: 203b48ff-b972-49b5-8c8d-c17eef616382 00:11:47.836 Superblock backups stored on blocks: 00:11:47.836 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:47.836 00:11:47.836 Allocating group tables: 0/64 done 00:11:47.836 Writing inode tables: 0/64 done 00:11:49.211 Creating journal (8192 blocks): done 00:11:49.211 Writing superblocks and filesystem accounting information: 0/64 done 00:11:49.211 00:11:49.211 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:49.211 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:55.768 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:55.768 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:55.768 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:55.768 07:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:55.768 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:55.768 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 657947 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:55.768 00:11:55.768 real 0m7.260s 00:11:55.768 user 0m0.021s 00:11:55.768 sys 0m0.053s 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:55.768 ************************************ 00:11:55.768 END TEST filesystem_ext4 00:11:55.768 ************************************ 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:55.768 
07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.768 ************************************ 00:11:55.768 START TEST filesystem_btrfs 00:11:55.768 ************************************ 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:55.768 07:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:55.768 btrfs-progs v6.8.1 00:11:55.768 See https://btrfs.readthedocs.io for more information. 00:11:55.768 00:11:55.768 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:55.768 NOTE: several default settings have changed in version 5.15, please make sure 00:11:55.768 this does not affect your deployments: 00:11:55.768 - DUP for metadata (-m dup) 00:11:55.768 - enabled no-holes (-O no-holes) 00:11:55.768 - enabled free-space-tree (-R free-space-tree) 00:11:55.768 00:11:55.768 Label: (null) 00:11:55.768 UUID: 7c92a723-3a52-4e80-a332-2e4a2dcd111e 00:11:55.768 Node size: 16384 00:11:55.768 Sector size: 4096 (CPU page size: 4096) 00:11:55.768 Filesystem size: 510.00MiB 00:11:55.768 Block group profiles: 00:11:55.768 Data: single 8.00MiB 00:11:55.768 Metadata: DUP 32.00MiB 00:11:55.768 System: DUP 8.00MiB 00:11:55.768 SSD detected: yes 00:11:55.768 Zoned device: no 00:11:55.768 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:55.768 Checksum: crc32c 00:11:55.768 Number of devices: 1 00:11:55.768 Devices: 00:11:55.768 ID SIZE PATH 00:11:55.768 1 510.00MiB /dev/nvme0n1p1 00:11:55.768 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:55.768 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.026 07:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 657947 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.026 00:11:56.026 real 0m0.985s 00:11:56.026 user 0m0.015s 00:11:56.026 sys 0m0.100s 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.026 
07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.026 ************************************ 00:11:56.026 END TEST filesystem_btrfs 00:11:56.026 ************************************ 00:11:56.026 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.027 ************************************ 00:11:56.027 START TEST filesystem_xfs 00:11:56.027 ************************************ 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:56.027 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:56.285 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:56.285 = sectsz=512 attr=2, projid32bit=1 00:11:56.285 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:56.285 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:56.285 data = bsize=4096 blocks=130560, imaxpct=25 00:11:56.285 = sunit=0 swidth=0 blks 00:11:56.285 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:56.285 log =internal log bsize=4096 blocks=16384, version=2 00:11:56.285 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:56.285 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:56.849 Discarding blocks...Done. 
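The trace above shows `make_filesystem` (common/autotest_common.sh@930-941) choosing a force flag per filesystem type before invoking `mkfs`: the `'[' ext4 = ext4 ']'` branch sets `force=-F`, while the `'[' btrfs = ext4 ']'` and `'[' xfs = ext4 ']'` branches fall through to `force=-f`. A minimal sketch of that selection logic follows; the helper name `pick_mkfs_force_flag` is an assumption for illustration, not a function in the SPDK scripts, which inline this logic.

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the force-flag choice visible in the
# make_filesystem trace: mkfs.ext4 takes -F, while mkfs.btrfs and
# mkfs.xfs take -f.
pick_mkfs_force_flag() {
    local fstype=$1
    if [ "$fstype" = ext4 ]; then
        echo "-F"
    else
        echo "-f"
    fi
}

# The mkfs call in the trace then reduces to, e.g.:
#   mkfs."$fstype" "$(pick_mkfs_force_flag "$fstype")" /dev/nvme0n1p1
pick_mkfs_force_flag ext4   # -F
pick_mkfs_force_flag btrfs  # -f
pick_mkfs_force_flag xfs    # -f
```

The force flag matters here because the test reuses the same partition (`/dev/nvme0n1p1`) for each filesystem type in turn, so every `mkfs` after the first is overwriting an existing filesystem signature.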
00:11:56.849 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:56.849 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 657947 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:59.377 07:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.377 00:11:59.377 real 0m3.080s 00:11:59.377 user 0m0.019s 00:11:59.377 sys 0m0.060s 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:59.377 ************************************ 00:11:59.377 END TEST filesystem_xfs 00:11:59.377 ************************************ 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:59.377 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 657947 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 657947 ']' 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 657947 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 657947 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 657947' 00:11:59.635 killing process with pid 657947 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 657947 00:11:59.635 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 657947 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:00.202 00:12:00.202 real 0m17.521s 00:12:00.202 user 1m7.907s 00:12:00.202 sys 0m2.218s 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.202 ************************************ 00:12:00.202 END TEST nvmf_filesystem_no_in_capsule 00:12:00.202 ************************************ 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.202 07:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.202 ************************************ 00:12:00.202 START TEST nvmf_filesystem_in_capsule 00:12:00.202 ************************************ 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=660301 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 660301 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 660301 ']' 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.202 07:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.202 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.202 [2024-11-18 07:45:53.124430] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:00.202 [2024-11-18 07:45:53.124549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.202 [2024-11-18 07:45:53.197073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.202 [2024-11-18 07:45:53.246855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.202 [2024-11-18 07:45:53.246917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.202 [2024-11-18 07:45:53.246931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.202 [2024-11-18 07:45:53.246942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.202 [2024-11-18 07:45:53.246951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:00.202 [2024-11-18 07:45:53.248500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.202 [2024-11-18 07:45:53.248557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.202 [2024-11-18 07:45:53.248623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.202 [2024-11-18 07:45:53.248626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.461 [2024-11-18 07:45:53.398464] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.461 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.720 Malloc1 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.720 07:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.720 [2024-11-18 07:45:53.589160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.720 07:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.720 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:00.720 { 00:12:00.720 "name": "Malloc1", 00:12:00.720 "aliases": [ 00:12:00.720 "a07494a7-3312-47ef-96c1-db02633650dd" 00:12:00.720 ], 00:12:00.720 "product_name": "Malloc disk", 00:12:00.720 "block_size": 512, 00:12:00.720 "num_blocks": 1048576, 00:12:00.720 "uuid": "a07494a7-3312-47ef-96c1-db02633650dd", 00:12:00.720 "assigned_rate_limits": { 00:12:00.721 "rw_ios_per_sec": 0, 00:12:00.721 "rw_mbytes_per_sec": 0, 00:12:00.721 "r_mbytes_per_sec": 0, 00:12:00.721 "w_mbytes_per_sec": 0 00:12:00.721 }, 00:12:00.721 "claimed": true, 00:12:00.721 "claim_type": "exclusive_write", 00:12:00.721 "zoned": false, 00:12:00.721 "supported_io_types": { 00:12:00.721 "read": true, 00:12:00.721 "write": true, 00:12:00.721 "unmap": true, 00:12:00.721 "flush": true, 00:12:00.721 "reset": true, 00:12:00.721 "nvme_admin": false, 00:12:00.721 "nvme_io": false, 00:12:00.721 "nvme_io_md": false, 00:12:00.721 "write_zeroes": true, 00:12:00.721 "zcopy": true, 00:12:00.721 "get_zone_info": false, 00:12:00.721 "zone_management": false, 00:12:00.721 "zone_append": false, 00:12:00.721 "compare": false, 00:12:00.721 "compare_and_write": false, 00:12:00.721 "abort": true, 00:12:00.721 "seek_hole": false, 00:12:00.721 "seek_data": false, 00:12:00.721 "copy": true, 00:12:00.721 "nvme_iov_md": false 00:12:00.721 }, 00:12:00.721 "memory_domains": [ 00:12:00.721 { 00:12:00.721 "dma_device_id": "system", 00:12:00.721 "dma_device_type": 1 00:12:00.721 }, 00:12:00.721 { 00:12:00.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.721 "dma_device_type": 2 00:12:00.721 } 00:12:00.721 ], 00:12:00.721 
"driver_specific": {} 00:12:00.721 } 00:12:00.721 ]' 00:12:00.721 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:00.721 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:00.721 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:00.721 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:00.721 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:00.721 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:00.721 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:00.721 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.285 07:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.285 07:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:01.285 07:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.285 07:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:01.285 07:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:03.812 07:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:03.812 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:04.744 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.678 ************************************ 00:12:05.678 START TEST filesystem_in_capsule_ext4 00:12:05.678 ************************************ 00:12:05.678 07:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:05.678 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:05.678 mke2fs 1.47.0 (5-Feb-2023) 00:12:05.678 Discarding device blocks: 
0/522240 done 00:12:05.678 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:05.678 Filesystem UUID: d295f2f8-db52-4fa2-89c4-4f9624b3b9fb 00:12:05.678 Superblock backups stored on blocks: 00:12:05.678 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:05.678 00:12:05.678 Allocating group tables: 0/64 done 00:12:05.678 Writing inode tables: 0/64 done 00:12:07.579 Creating journal (8192 blocks): done 00:12:07.579 Writing superblocks and filesystem accounting information: 0/64 done 00:12:07.579 00:12:07.579 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:07.579 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 660301 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.840 00:12:12.840 real 0m7.117s 00:12:12.840 user 0m0.025s 00:12:12.840 sys 0m0.061s 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:12.840 ************************************ 00:12:12.840 END TEST filesystem_in_capsule_ext4 00:12:12.840 ************************************ 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.840 ************************************ 00:12:12.840 START 
TEST filesystem_in_capsule_btrfs 00:12:12.840 ************************************ 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:12.840 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.841 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:12.841 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:12.841 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:12.841 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:12.841 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:12.841 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:12.841 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:12.841 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:13.099 btrfs-progs v6.8.1 00:12:13.099 See https://btrfs.readthedocs.io for more information. 00:12:13.099 00:12:13.099 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:13.099 NOTE: several default settings have changed in version 5.15, please make sure 00:12:13.099 this does not affect your deployments: 00:12:13.099 - DUP for metadata (-m dup) 00:12:13.099 - enabled no-holes (-O no-holes) 00:12:13.099 - enabled free-space-tree (-R free-space-tree) 00:12:13.099 00:12:13.099 Label: (null) 00:12:13.099 UUID: 979edc72-7d15-4a3b-9255-81210ca75bb0 00:12:13.099 Node size: 16384 00:12:13.099 Sector size: 4096 (CPU page size: 4096) 00:12:13.099 Filesystem size: 510.00MiB 00:12:13.099 Block group profiles: 00:12:13.099 Data: single 8.00MiB 00:12:13.099 Metadata: DUP 32.00MiB 00:12:13.099 System: DUP 8.00MiB 00:12:13.099 SSD detected: yes 00:12:13.099 Zoned device: no 00:12:13.099 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:13.099 Checksum: crc32c 00:12:13.099 Number of devices: 1 00:12:13.099 Devices: 00:12:13.099 ID SIZE PATH 00:12:13.099 1 510.00MiB /dev/nvme0n1p1 00:12:13.099 00:12:13.099 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:13.099 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:13.357 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:13.357 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:13.357 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:13.357 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:13.357 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:13.357 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 660301 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:13.616 00:12:13.616 real 0m0.764s 00:12:13.616 user 0m0.023s 00:12:13.616 sys 0m0.099s 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:13.616 ************************************ 00:12:13.616 END TEST filesystem_in_capsule_btrfs 00:12:13.616 ************************************ 00:12:13.616 07:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.616 ************************************ 00:12:13.616 START TEST filesystem_in_capsule_xfs 00:12:13.616 ************************************ 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:13.616 
07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:13.616 07:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:13.616 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:13.616 = sectsz=512 attr=2, projid32bit=1 00:12:13.616 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:13.616 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:13.616 data = bsize=4096 blocks=130560, imaxpct=25 00:12:13.616 = sunit=0 swidth=0 blks 00:12:13.616 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:13.616 log =internal log bsize=4096 blocks=16384, version=2 00:12:13.616 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:13.616 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:14.551 Discarding blocks...Done. 
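After each mkfs, the harness runs the same quick smoke cycle against the mounted partition (the mount/touch/sync/rm/umount steps of target/filesystem.sh visible throughout this log). A hedged sketch of that cycle, using a temp directory in place of /mnt/device so it runs without the NVMe namespace or root privileges:

```shell
# Illustrative stand-in for the per-filesystem smoke test:
# mount the fresh fs, create a file, sync, delete it, sync, unmount.
dev_mnt=$(mktemp -d)      # stands in for: mount /dev/nvme0n1p1 /mnt/device
touch "$dev_mnt/aaa"      # write a file through the new filesystem
sync                      # flush it down the NVMe/TCP data path
rm "$dev_mnt/aaa"         # delete it again
sync
rmdir "$dev_mnt"          # stands in for: umount /mnt/device
echo "smoke test ok"
```

The real run then checks that `lsblk` still shows `nvme0n1` and `nvme0n1p1` before reporting the per-filesystem timing.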
00:12:14.551 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:14.551 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 660301 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.104 00:12:17.104 real 0m3.406s 00:12:17.104 user 0m0.020s 00:12:17.104 sys 0m0.058s 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.104 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:17.105 ************************************ 00:12:17.105 END TEST filesystem_in_capsule_xfs 00:12:17.105 ************************************ 00:12:17.105 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.363 07:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 660301 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 660301 ']' 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 660301 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.363 07:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 660301 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 660301' 00:12:17.363 killing process with pid 660301 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 660301 00:12:17.363 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 660301 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:17.932 00:12:17.932 real 0m17.700s 00:12:17.932 user 1m8.463s 00:12:17.932 sys 0m2.413s 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.932 ************************************ 00:12:17.932 END TEST nvmf_filesystem_in_capsule 00:12:17.932 ************************************ 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.932 rmmod nvme_tcp 00:12:17.932 rmmod nvme_fabrics 00:12:17.932 rmmod nvme_keyring 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.932 07:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.839 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.839 00:12:19.839 real 0m40.142s 00:12:19.839 user 2m17.477s 00:12:19.839 sys 0m6.462s 00:12:19.839 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.839 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.839 ************************************ 00:12:19.839 END TEST nvmf_filesystem 00:12:19.839 ************************************ 00:12:19.839 07:46:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:19.839 07:46:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.099 07:46:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.099 07:46:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.099 ************************************ 00:12:20.099 START TEST nvmf_target_discovery 00:12:20.099 ************************************ 00:12:20.099 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:20.099 * Looking for test storage... 
00:12:20.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:20.099 
07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:20.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.099 --rc genhtml_branch_coverage=1 00:12:20.099 --rc genhtml_function_coverage=1 00:12:20.099 --rc genhtml_legend=1 00:12:20.099 --rc geninfo_all_blocks=1 00:12:20.099 --rc geninfo_unexecuted_blocks=1 00:12:20.099 00:12:20.099 ' 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:20.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.099 --rc genhtml_branch_coverage=1 00:12:20.099 --rc genhtml_function_coverage=1 00:12:20.099 --rc genhtml_legend=1 00:12:20.099 --rc geninfo_all_blocks=1 00:12:20.099 --rc geninfo_unexecuted_blocks=1 00:12:20.099 00:12:20.099 ' 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:20.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.099 --rc genhtml_branch_coverage=1 00:12:20.099 --rc genhtml_function_coverage=1 00:12:20.099 --rc genhtml_legend=1 00:12:20.099 --rc geninfo_all_blocks=1 00:12:20.099 --rc geninfo_unexecuted_blocks=1 00:12:20.099 00:12:20.099 ' 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:20.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.099 --rc genhtml_branch_coverage=1 00:12:20.099 --rc genhtml_function_coverage=1 00:12:20.099 --rc genhtml_legend=1 00:12:20.099 --rc geninfo_all_blocks=1 00:12:20.099 --rc geninfo_unexecuted_blocks=1 00:12:20.099 00:12:20.099 ' 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.099 07:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.099 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.100 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.631 07:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.631 07:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:22.631 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.631 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:22.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.632 07:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:22.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.632 07:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:22.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:12:22.632 00:12:22.632 --- 10.0.0.2 ping statistics --- 00:12:22.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.632 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:12:22.632 00:12:22.632 --- 10.0.0.1 ping statistics --- 00:12:22.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.632 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=664472 00:12:22.632 07:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 664472 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 664472 ']' 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.632 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.632 [2024-11-18 07:46:15.520136] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:22.632 [2024-11-18 07:46:15.520235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.632 [2024-11-18 07:46:15.592575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.632 [2024-11-18 07:46:15.636307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:22.632 [2024-11-18 07:46:15.636379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.632 [2024-11-18 07:46:15.636402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.632 [2024-11-18 07:46:15.636413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.632 [2024-11-18 07:46:15.636422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.632 [2024-11-18 07:46:15.638033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.632 [2024-11-18 07:46:15.638099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.632 [2024-11-18 07:46:15.638207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.632 [2024-11-18 07:46:15.638210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.889 [2024-11-18 07:46:15.785069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.889 Null1 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.889 
07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.889 [2024-11-18 07:46:15.825382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.889 Null2 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.889 
07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:22.889 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 Null3 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 Null4 00:12:22.890 
07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.890 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:23.147 00:12:23.147 Discovery Log Number of Records 6, Generation counter 6 00:12:23.148 =====Discovery Log Entry 0====== 00:12:23.148 trtype: tcp 00:12:23.148 adrfam: ipv4 00:12:23.148 subtype: current discovery subsystem 00:12:23.148 treq: not required 00:12:23.148 portid: 0 00:12:23.148 trsvcid: 4420 00:12:23.148 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:23.148 traddr: 10.0.0.2 00:12:23.148 eflags: explicit discovery connections, duplicate discovery information 00:12:23.148 sectype: none 00:12:23.148 =====Discovery Log Entry 1====== 00:12:23.148 trtype: tcp 00:12:23.148 adrfam: ipv4 00:12:23.148 subtype: nvme subsystem 00:12:23.148 treq: not required 00:12:23.148 portid: 0 00:12:23.148 trsvcid: 4420 00:12:23.148 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:23.148 traddr: 10.0.0.2 00:12:23.148 eflags: none 00:12:23.148 sectype: none 00:12:23.148 =====Discovery Log Entry 2====== 00:12:23.148 
trtype: tcp 00:12:23.148 adrfam: ipv4 00:12:23.148 subtype: nvme subsystem 00:12:23.148 treq: not required 00:12:23.148 portid: 0 00:12:23.148 trsvcid: 4420 00:12:23.148 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:23.148 traddr: 10.0.0.2 00:12:23.148 eflags: none 00:12:23.148 sectype: none 00:12:23.148 =====Discovery Log Entry 3====== 00:12:23.148 trtype: tcp 00:12:23.148 adrfam: ipv4 00:12:23.148 subtype: nvme subsystem 00:12:23.148 treq: not required 00:12:23.148 portid: 0 00:12:23.148 trsvcid: 4420 00:12:23.148 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:23.148 traddr: 10.0.0.2 00:12:23.148 eflags: none 00:12:23.148 sectype: none 00:12:23.148 =====Discovery Log Entry 4====== 00:12:23.148 trtype: tcp 00:12:23.148 adrfam: ipv4 00:12:23.148 subtype: nvme subsystem 00:12:23.148 treq: not required 00:12:23.148 portid: 0 00:12:23.148 trsvcid: 4420 00:12:23.148 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:23.148 traddr: 10.0.0.2 00:12:23.148 eflags: none 00:12:23.148 sectype: none 00:12:23.148 =====Discovery Log Entry 5====== 00:12:23.148 trtype: tcp 00:12:23.148 adrfam: ipv4 00:12:23.148 subtype: discovery subsystem referral 00:12:23.148 treq: not required 00:12:23.148 portid: 0 00:12:23.148 trsvcid: 4430 00:12:23.148 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:23.148 traddr: 10.0.0.2 00:12:23.148 eflags: none 00:12:23.148 sectype: none 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:23.148 Perform nvmf subsystem discovery via RPC 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.148 [ 00:12:23.148 { 00:12:23.148 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:23.148 "subtype": "Discovery", 00:12:23.148 "listen_addresses": [ 00:12:23.148 { 00:12:23.148 "trtype": "TCP", 00:12:23.148 "adrfam": "IPv4", 00:12:23.148 "traddr": "10.0.0.2", 00:12:23.148 "trsvcid": "4420" 00:12:23.148 } 00:12:23.148 ], 00:12:23.148 "allow_any_host": true, 00:12:23.148 "hosts": [] 00:12:23.148 }, 00:12:23.148 { 00:12:23.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.148 "subtype": "NVMe", 00:12:23.148 "listen_addresses": [ 00:12:23.148 { 00:12:23.148 "trtype": "TCP", 00:12:23.148 "adrfam": "IPv4", 00:12:23.148 "traddr": "10.0.0.2", 00:12:23.148 "trsvcid": "4420" 00:12:23.148 } 00:12:23.148 ], 00:12:23.148 "allow_any_host": true, 00:12:23.148 "hosts": [], 00:12:23.148 "serial_number": "SPDK00000000000001", 00:12:23.148 "model_number": "SPDK bdev Controller", 00:12:23.148 "max_namespaces": 32, 00:12:23.148 "min_cntlid": 1, 00:12:23.148 "max_cntlid": 65519, 00:12:23.148 "namespaces": [ 00:12:23.148 { 00:12:23.148 "nsid": 1, 00:12:23.148 "bdev_name": "Null1", 00:12:23.148 "name": "Null1", 00:12:23.148 "nguid": "34AED71626B74FF6883A73AB1FC160FE", 00:12:23.148 "uuid": "34aed716-26b7-4ff6-883a-73ab1fc160fe" 00:12:23.148 } 00:12:23.148 ] 00:12:23.148 }, 00:12:23.148 { 00:12:23.148 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:23.148 "subtype": "NVMe", 00:12:23.148 "listen_addresses": [ 00:12:23.148 { 00:12:23.148 "trtype": "TCP", 00:12:23.148 "adrfam": "IPv4", 00:12:23.148 "traddr": "10.0.0.2", 00:12:23.148 "trsvcid": "4420" 00:12:23.148 } 00:12:23.148 ], 00:12:23.148 "allow_any_host": true, 00:12:23.148 "hosts": [], 00:12:23.148 "serial_number": "SPDK00000000000002", 00:12:23.148 "model_number": "SPDK bdev Controller", 00:12:23.148 "max_namespaces": 32, 00:12:23.148 "min_cntlid": 1, 00:12:23.148 "max_cntlid": 65519, 00:12:23.148 "namespaces": [ 00:12:23.148 { 00:12:23.148 "nsid": 1, 00:12:23.148 "bdev_name": "Null2", 00:12:23.148 "name": "Null2", 00:12:23.148 "nguid": "F0F0591E437E4A009FFA35B345F20500", 
00:12:23.148 "uuid": "f0f0591e-437e-4a00-9ffa-35b345f20500" 00:12:23.148 } 00:12:23.148 ] 00:12:23.148 }, 00:12:23.148 { 00:12:23.148 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:23.148 "subtype": "NVMe", 00:12:23.148 "listen_addresses": [ 00:12:23.148 { 00:12:23.148 "trtype": "TCP", 00:12:23.148 "adrfam": "IPv4", 00:12:23.148 "traddr": "10.0.0.2", 00:12:23.148 "trsvcid": "4420" 00:12:23.148 } 00:12:23.148 ], 00:12:23.148 "allow_any_host": true, 00:12:23.148 "hosts": [], 00:12:23.148 "serial_number": "SPDK00000000000003", 00:12:23.148 "model_number": "SPDK bdev Controller", 00:12:23.148 "max_namespaces": 32, 00:12:23.148 "min_cntlid": 1, 00:12:23.148 "max_cntlid": 65519, 00:12:23.148 "namespaces": [ 00:12:23.148 { 00:12:23.148 "nsid": 1, 00:12:23.148 "bdev_name": "Null3", 00:12:23.148 "name": "Null3", 00:12:23.148 "nguid": "3BA094664C554A0A92A406756369B4C8", 00:12:23.148 "uuid": "3ba09466-4c55-4a0a-92a4-06756369b4c8" 00:12:23.148 } 00:12:23.148 ] 00:12:23.148 }, 00:12:23.148 { 00:12:23.148 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:23.148 "subtype": "NVMe", 00:12:23.148 "listen_addresses": [ 00:12:23.148 { 00:12:23.148 "trtype": "TCP", 00:12:23.148 "adrfam": "IPv4", 00:12:23.148 "traddr": "10.0.0.2", 00:12:23.148 "trsvcid": "4420" 00:12:23.148 } 00:12:23.148 ], 00:12:23.148 "allow_any_host": true, 00:12:23.148 "hosts": [], 00:12:23.148 "serial_number": "SPDK00000000000004", 00:12:23.148 "model_number": "SPDK bdev Controller", 00:12:23.148 "max_namespaces": 32, 00:12:23.148 "min_cntlid": 1, 00:12:23.148 "max_cntlid": 65519, 00:12:23.148 "namespaces": [ 00:12:23.148 { 00:12:23.148 "nsid": 1, 00:12:23.148 "bdev_name": "Null4", 00:12:23.148 "name": "Null4", 00:12:23.148 "nguid": "346BDD258AA042A7867616FAE4FADA5A", 00:12:23.148 "uuid": "346bdd25-8aa0-42a7-8676-16fae4fada5a" 00:12:23.148 } 00:12:23.148 ] 00:12:23.148 } 00:12:23.148 ] 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.148 
07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.148 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.149 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:23.406 rmmod nvme_tcp 00:12:23.406 rmmod nvme_fabrics 00:12:23.406 rmmod nvme_keyring 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 664472 ']' 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 664472 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 664472 ']' 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 664472 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:23.406 
07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 664472 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 664472' 00:12:23.406 killing process with pid 664472 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 664472 00:12:23.406 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 664472 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.665 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.597 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:25.597 00:12:25.597 real 0m5.640s 00:12:25.597 user 0m4.664s 00:12:25.597 sys 0m1.952s 00:12:25.597 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.597 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.597 ************************************ 00:12:25.597 END TEST nvmf_target_discovery 00:12:25.597 ************************************ 00:12:25.597 07:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:25.597 07:46:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.597 07:46:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.597 07:46:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.597 ************************************ 00:12:25.597 START TEST nvmf_referrals 00:12:25.597 ************************************ 00:12:25.597 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:25.879 * Looking for test storage... 
00:12:25.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.879 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.879 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.879 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:25.880 07:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.880 
--rc genhtml_branch_coverage=1 00:12:25.880 --rc genhtml_function_coverage=1 00:12:25.880 --rc genhtml_legend=1 00:12:25.880 --rc geninfo_all_blocks=1 00:12:25.880 --rc geninfo_unexecuted_blocks=1 00:12:25.880 00:12:25.880 ' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.880 --rc genhtml_branch_coverage=1 00:12:25.880 --rc genhtml_function_coverage=1 00:12:25.880 --rc genhtml_legend=1 00:12:25.880 --rc geninfo_all_blocks=1 00:12:25.880 --rc geninfo_unexecuted_blocks=1 00:12:25.880 00:12:25.880 ' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.880 --rc genhtml_branch_coverage=1 00:12:25.880 --rc genhtml_function_coverage=1 00:12:25.880 --rc genhtml_legend=1 00:12:25.880 --rc geninfo_all_blocks=1 00:12:25.880 --rc geninfo_unexecuted_blocks=1 00:12:25.880 00:12:25.880 ' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.880 --rc genhtml_branch_coverage=1 00:12:25.880 --rc genhtml_function_coverage=1 00:12:25.880 --rc genhtml_legend=1 00:12:25.880 --rc geninfo_all_blocks=1 00:12:25.880 --rc geninfo_unexecuted_blocks=1 00:12:25.880 00:12:25.880 ' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.880 
07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.880 07:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:25.880 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.881 07:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.881 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:28.411 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:28.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:28.412 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:28.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:28.412 07:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:28.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:28.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:12:28.412 00:12:28.412 --- 10.0.0.2 ping statistics --- 00:12:28.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.412 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:12:28.412 00:12:28.412 --- 10.0.0.1 ping statistics --- 00:12:28.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.412 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=666576 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 666576 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 666576 ']' 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.412 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.412 [2024-11-18 07:46:21.281296] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:28.412 [2024-11-18 07:46:21.281383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.412 [2024-11-18 07:46:21.355533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.412 [2024-11-18 07:46:21.405891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.412 [2024-11-18 07:46:21.405953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:28.412 [2024-11-18 07:46:21.405966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.413 [2024-11-18 07:46:21.405978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.413 [2024-11-18 07:46:21.405987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.413 [2024-11-18 07:46:21.407549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.413 [2024-11-18 07:46:21.407612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.413 [2024-11-18 07:46:21.407678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.413 [2024-11-18 07:46:21.407681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.670 [2024-11-18 07:46:21.557810] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.670 [2024-11-18 07:46:21.570052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:28.670 07:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.670 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.671 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.928 07:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.928 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.186 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.443 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:29.443 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:29.443 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:29.443 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:29.443 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:29.443 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.443 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.700 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.958 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:30.216 07:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.216 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.474 rmmod nvme_tcp 00:12:30.474 rmmod nvme_fabrics 00:12:30.474 rmmod nvme_keyring 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 666576 ']' 00:12:30.474 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 666576 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 666576 ']' 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 666576 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666576 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666576' 00:12:30.475 killing process with pid 666576 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 666576 00:12:30.475 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 666576 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.734 07:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.273 00:12:33.273 real 0m7.134s 00:12:33.273 user 0m10.750s 00:12:33.273 sys 0m2.462s 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.273 ************************************ 
00:12:33.273 END TEST nvmf_referrals 00:12:33.273 ************************************ 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.273 ************************************ 00:12:33.273 START TEST nvmf_connect_disconnect 00:12:33.273 ************************************ 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:33.273 * Looking for test storage... 
00:12:33.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:33.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.273 --rc genhtml_branch_coverage=1 00:12:33.273 --rc genhtml_function_coverage=1 00:12:33.273 --rc genhtml_legend=1 00:12:33.273 --rc geninfo_all_blocks=1 00:12:33.273 --rc geninfo_unexecuted_blocks=1 00:12:33.273 00:12:33.273 ' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:33.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.273 --rc genhtml_branch_coverage=1 00:12:33.273 --rc genhtml_function_coverage=1 00:12:33.273 --rc genhtml_legend=1 00:12:33.273 --rc geninfo_all_blocks=1 00:12:33.273 --rc geninfo_unexecuted_blocks=1 00:12:33.273 00:12:33.273 ' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:33.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.273 --rc genhtml_branch_coverage=1 00:12:33.273 --rc genhtml_function_coverage=1 00:12:33.273 --rc genhtml_legend=1 00:12:33.273 --rc geninfo_all_blocks=1 00:12:33.273 --rc geninfo_unexecuted_blocks=1 00:12:33.273 00:12:33.273 ' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:33.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.273 --rc genhtml_branch_coverage=1 00:12:33.273 --rc genhtml_function_coverage=1 00:12:33.273 --rc genhtml_legend=1 00:12:33.273 --rc geninfo_all_blocks=1 00:12:33.273 --rc geninfo_unexecuted_blocks=1 00:12:33.273 00:12:33.273 ' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.273 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.274 07:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.176 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.176 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.176 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:35.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:35.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.177 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:35.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.177 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:35.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.177 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.435 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:12:35.435 00:12:35.435 --- 10.0.0.2 ping statistics --- 00:12:35.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.435 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:12:35.435 00:12:35.435 --- 10.0.0.1 ping statistics --- 00:12:35.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.435 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=668874 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 668874 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 668874 ']' 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.435 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.435 [2024-11-18 07:46:28.380941] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:35.435 [2024-11-18 07:46:28.381034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.435 [2024-11-18 07:46:28.459312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.435 [2024-11-18 07:46:28.506852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:35.435 [2024-11-18 07:46:28.506910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.435 [2024-11-18 07:46:28.506933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.435 [2024-11-18 07:46:28.506943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.435 [2024-11-18 07:46:28.506952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.435 [2024-11-18 07:46:28.508515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.435 [2024-11-18 07:46:28.508604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.435 [2024-11-18 07:46:28.508682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.435 [2024-11-18 07:46:28.508685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:35.693 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 [2024-11-18 07:46:28.643046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.693 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 [2024-11-18 07:46:28.705225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:35.693 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:38.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.668 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.409 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.847 [2024-11-18 07:48:18.478285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201b510 is same with the state(6) to be set 00:14:25.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.322 [2024-11-18 07:48:27.875439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6b70 is same with the state(6) to be set 00:14:35.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:14:51.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.096 [2024-11-18 07:49:42.580264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ad20 is same with the state(6) to be set 
00:15:50.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@124 -- # set +e 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.339 rmmod nvme_tcp 00:16:29.339 rmmod nvme_fabrics 00:16:29.339 rmmod nvme_keyring 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 668874 ']' 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 668874 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 668874 ']' 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 668874 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 668874 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 668874' 00:16:29.339 killing process with pid 668874 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 668874 00:16:29.339 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 668874 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.598 07:50:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.508 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:31.508 00:16:31.508 real 3m58.704s 00:16:31.508 user 15m9.005s 
00:16:31.508 sys 0m35.896s 00:16:31.508 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.508 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.508 ************************************ 00:16:31.508 END TEST nvmf_connect_disconnect 00:16:31.508 ************************************ 00:16:31.508 07:50:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:31.508 07:50:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.508 07:50:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.508 07:50:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:31.508 ************************************ 00:16:31.508 START TEST nvmf_multitarget 00:16:31.508 ************************************ 00:16:31.508 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:31.768 * Looking for test storage... 
00:16:31.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:31.768 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.768 --rc genhtml_branch_coverage=1 00:16:31.768 --rc genhtml_function_coverage=1 00:16:31.768 --rc genhtml_legend=1 00:16:31.768 --rc geninfo_all_blocks=1 00:16:31.768 --rc geninfo_unexecuted_blocks=1 00:16:31.768 00:16:31.768 ' 00:16:31.768 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:31.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.768 --rc genhtml_branch_coverage=1 00:16:31.768 --rc genhtml_function_coverage=1 00:16:31.768 --rc genhtml_legend=1 00:16:31.768 --rc geninfo_all_blocks=1 00:16:31.768 --rc geninfo_unexecuted_blocks=1 00:16:31.768 00:16:31.768 ' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:31.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.769 --rc genhtml_branch_coverage=1 00:16:31.769 --rc genhtml_function_coverage=1 00:16:31.769 --rc genhtml_legend=1 00:16:31.769 --rc geninfo_all_blocks=1 00:16:31.769 --rc geninfo_unexecuted_blocks=1 00:16:31.769 00:16:31.769 ' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:31.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.769 --rc genhtml_branch_coverage=1 00:16:31.769 --rc genhtml_function_coverage=1 00:16:31.769 --rc genhtml_legend=1 00:16:31.769 --rc geninfo_all_blocks=1 00:16:31.769 --rc geninfo_unexecuted_blocks=1 00:16:31.769 00:16:31.769 ' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.769 07:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.769 07:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:31.769 07:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:33.677 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.677 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:33.677 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:33.677 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:33.678 07:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:33.678 07:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:33.678 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:33.678 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.678 07:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:33.678 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.678 
07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:33.678 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.678 07:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.678 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:33.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:16:33.937 00:16:33.937 --- 10.0.0.2 ping statistics --- 00:16:33.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.937 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:16:33.937 00:16:33.937 --- 10.0.0.1 ping statistics --- 00:16:33.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.937 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=700280 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 700280 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 700280 ']' 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.937 07:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:33.937 [2024-11-18 07:50:26.951329] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:16:33.938 [2024-11-18 07:50:26.951429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.938 [2024-11-18 07:50:27.023450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.196 [2024-11-18 07:50:27.067605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.196 [2024-11-18 07:50:27.067664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:34.196 [2024-11-18 07:50:27.067686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.196 [2024-11-18 07:50:27.067697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.196 [2024-11-18 07:50:27.067707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.196 [2024-11-18 07:50:27.069260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.196 [2024-11-18 07:50:27.069369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.196 [2024-11-18 07:50:27.069456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.196 [2024-11-18 07:50:27.069459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.196 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.196 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:34.196 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.196 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.196 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.196 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.196 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:34.196 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.196 07:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:34.453 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:34.453 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:34.453 "nvmf_tgt_1" 00:16:34.453 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:34.711 "nvmf_tgt_2" 00:16:34.711 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.711 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:34.711 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:34.711 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:34.711 true 00:16:34.968 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:34.968 true 00:16:34.968 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.968 07:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:34.968 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:34.968 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:34.968 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:34.968 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:34.968 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:34.968 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.969 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:34.969 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.969 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:34.969 rmmod nvme_tcp 00:16:34.969 rmmod nvme_fabrics 00:16:35.227 rmmod nvme_keyring 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 700280 ']' 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 700280 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 700280 ']' 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 700280 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 700280 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 700280' 00:16:35.227 killing process with pid 700280 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 700280 00:16:35.227 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 700280 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.487 07:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.392 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:37.392 00:16:37.392 real 0m5.790s 00:16:37.392 user 0m6.677s 00:16:37.392 sys 0m1.922s 00:16:37.392 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.392 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:37.392 ************************************ 00:16:37.392 END TEST nvmf_multitarget 00:16:37.392 ************************************ 00:16:37.392 07:50:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:37.392 07:50:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:37.392 07:50:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.392 07:50:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.392 ************************************ 00:16:37.392 START TEST nvmf_rpc 00:16:37.392 ************************************ 00:16:37.392 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:37.651 * Looking for test storage... 
00:16:37.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.651 07:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.651 --rc genhtml_branch_coverage=1 00:16:37.651 --rc genhtml_function_coverage=1 00:16:37.651 --rc genhtml_legend=1 00:16:37.651 --rc geninfo_all_blocks=1 00:16:37.651 --rc geninfo_unexecuted_blocks=1 
00:16:37.651 00:16:37.651 ' 00:16:37.651 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.651 --rc genhtml_branch_coverage=1 00:16:37.651 --rc genhtml_function_coverage=1 00:16:37.651 --rc genhtml_legend=1 00:16:37.651 --rc geninfo_all_blocks=1 00:16:37.651 --rc geninfo_unexecuted_blocks=1 00:16:37.651 00:16:37.652 ' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:37.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.652 --rc genhtml_branch_coverage=1 00:16:37.652 --rc genhtml_function_coverage=1 00:16:37.652 --rc genhtml_legend=1 00:16:37.652 --rc geninfo_all_blocks=1 00:16:37.652 --rc geninfo_unexecuted_blocks=1 00:16:37.652 00:16:37.652 ' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:37.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.652 --rc genhtml_branch_coverage=1 00:16:37.652 --rc genhtml_function_coverage=1 00:16:37.652 --rc genhtml_legend=1 00:16:37.652 --rc geninfo_all_blocks=1 00:16:37.652 --rc geninfo_unexecuted_blocks=1 00:16:37.652 00:16:37.652 ' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.652 07:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:37.652 07:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:37.652 07:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.556 
07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.556 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:16:39.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:39.557 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:39.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:39.557 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.557 07:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:39.557 
07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.557 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:39.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:16:39.816 00:16:39.816 --- 10.0.0.2 ping statistics --- 00:16:39.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.816 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:16:39.816 00:16:39.816 --- 10.0.0.1 ping statistics --- 00:16:39.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.816 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=702385 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:39.816 
07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 702385 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 702385 ']' 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.816 07:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.816 [2024-11-18 07:50:32.813187] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:16:39.816 [2024-11-18 07:50:32.813289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.816 [2024-11-18 07:50:32.887144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.075 [2024-11-18 07:50:32.933847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.075 [2024-11-18 07:50:32.933894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.075 [2024-11-18 07:50:32.933918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.075 [2024-11-18 07:50:32.933930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:40.075 [2024-11-18 07:50:32.933946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.075 [2024-11-18 07:50:32.935634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.075 [2024-11-18 07:50:32.935687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.075 [2024-11-18 07:50:32.935711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.075 [2024-11-18 07:50:32.935714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:40.075 "tick_rate": 2700000000, 00:16:40.075 "poll_groups": [ 00:16:40.075 { 00:16:40.075 "name": "nvmf_tgt_poll_group_000", 00:16:40.075 "admin_qpairs": 0, 00:16:40.075 "io_qpairs": 0, 00:16:40.075 
"current_admin_qpairs": 0, 00:16:40.075 "current_io_qpairs": 0, 00:16:40.075 "pending_bdev_io": 0, 00:16:40.075 "completed_nvme_io": 0, 00:16:40.075 "transports": [] 00:16:40.075 }, 00:16:40.075 { 00:16:40.075 "name": "nvmf_tgt_poll_group_001", 00:16:40.075 "admin_qpairs": 0, 00:16:40.075 "io_qpairs": 0, 00:16:40.075 "current_admin_qpairs": 0, 00:16:40.075 "current_io_qpairs": 0, 00:16:40.075 "pending_bdev_io": 0, 00:16:40.075 "completed_nvme_io": 0, 00:16:40.075 "transports": [] 00:16:40.075 }, 00:16:40.075 { 00:16:40.075 "name": "nvmf_tgt_poll_group_002", 00:16:40.075 "admin_qpairs": 0, 00:16:40.075 "io_qpairs": 0, 00:16:40.075 "current_admin_qpairs": 0, 00:16:40.075 "current_io_qpairs": 0, 00:16:40.075 "pending_bdev_io": 0, 00:16:40.075 "completed_nvme_io": 0, 00:16:40.075 "transports": [] 00:16:40.075 }, 00:16:40.075 { 00:16:40.075 "name": "nvmf_tgt_poll_group_003", 00:16:40.075 "admin_qpairs": 0, 00:16:40.075 "io_qpairs": 0, 00:16:40.075 "current_admin_qpairs": 0, 00:16:40.075 "current_io_qpairs": 0, 00:16:40.075 "pending_bdev_io": 0, 00:16:40.075 "completed_nvme_io": 0, 00:16:40.075 "transports": [] 00:16:40.075 } 00:16:40.075 ] 00:16:40.075 }' 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:40.075 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:40.333 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:40.333 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 [2024-11-18 07:50:33.170678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:40.334 "tick_rate": 2700000000, 00:16:40.334 "poll_groups": [ 00:16:40.334 { 00:16:40.334 "name": "nvmf_tgt_poll_group_000", 00:16:40.334 "admin_qpairs": 0, 00:16:40.334 "io_qpairs": 0, 00:16:40.334 "current_admin_qpairs": 0, 00:16:40.334 "current_io_qpairs": 0, 00:16:40.334 "pending_bdev_io": 0, 00:16:40.334 "completed_nvme_io": 0, 00:16:40.334 "transports": [ 00:16:40.334 { 00:16:40.334 "trtype": "TCP" 00:16:40.334 } 00:16:40.334 ] 00:16:40.334 }, 00:16:40.334 { 00:16:40.334 "name": "nvmf_tgt_poll_group_001", 00:16:40.334 "admin_qpairs": 0, 00:16:40.334 "io_qpairs": 0, 00:16:40.334 "current_admin_qpairs": 0, 00:16:40.334 "current_io_qpairs": 0, 00:16:40.334 "pending_bdev_io": 0, 00:16:40.334 "completed_nvme_io": 0, 00:16:40.334 "transports": [ 00:16:40.334 { 00:16:40.334 "trtype": "TCP" 00:16:40.334 } 00:16:40.334 ] 00:16:40.334 }, 00:16:40.334 { 00:16:40.334 "name": "nvmf_tgt_poll_group_002", 00:16:40.334 "admin_qpairs": 0, 00:16:40.334 "io_qpairs": 0, 00:16:40.334 
"current_admin_qpairs": 0, 00:16:40.334 "current_io_qpairs": 0, 00:16:40.334 "pending_bdev_io": 0, 00:16:40.334 "completed_nvme_io": 0, 00:16:40.334 "transports": [ 00:16:40.334 { 00:16:40.334 "trtype": "TCP" 00:16:40.334 } 00:16:40.334 ] 00:16:40.334 }, 00:16:40.334 { 00:16:40.334 "name": "nvmf_tgt_poll_group_003", 00:16:40.334 "admin_qpairs": 0, 00:16:40.334 "io_qpairs": 0, 00:16:40.334 "current_admin_qpairs": 0, 00:16:40.334 "current_io_qpairs": 0, 00:16:40.334 "pending_bdev_io": 0, 00:16:40.334 "completed_nvme_io": 0, 00:16:40.334 "transports": [ 00:16:40.334 { 00:16:40.334 "trtype": "TCP" 00:16:40.334 } 00:16:40.334 ] 00:16:40.334 } 00:16:40.334 ] 00:16:40.334 }' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 Malloc1 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 [2024-11-18 07:50:33.338336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.334 
07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:40.334 [2024-11-18 07:50:33.360876] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:40.334 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:40.334 could not add new controller: failed to write to nvme-fabrics device 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 07:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.900 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.900 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:40.900 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.900 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:40.900 07:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:43.426 07:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:43.426 07:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:43.426 07:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.426 07:50:36 
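The `waitforserial` helper exercised above (autotest_common.sh@1202-1212) polls for the expected number of block devices carrying the test serial, retrying up to 15 times with a 2-second sleep. A sketch of that loop shape, with the real `lsblk -l -o NAME,SERIAL | grep -c` probe replaced by a stub (`count_devices`, a hypothetical name) so it can run without NVMe hardware:

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforserial polling pattern seen in the trace.
# count_devices is a stand-in for the real probe:
#   lsblk -l -o NAME,SERIAL | grep -c "$serial"
count_devices() { echo 1; }   # stub: pretend one device with the serial exists

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do
        found=$(count_devices "$serial")
        # success once the device count matches the expected controller count
        (( found == expected )) && return 0
        sleep 2
    done
    return 1   # device never appeared within the retry budget
}

waitforserial SPDKISFASTANDAWESOME && echo "serial present"
```

The companion `waitforserial_disconnect` (autotest_common.sh@1223-1235) inverts the check, returning once `grep -q -w` no longer finds the serial after `nvme disconnect`.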
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.426 [2024-11-18 07:50:36.103522] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:43.426 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:43.426 could not add new controller: failed to write to nvme-fabrics device 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.426 07:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.426 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.684 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.684 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.684 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.684 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.684 07:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.274 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.275 [2024-11-18 07:50:38.958300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.275 07:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:46.532 07:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.532 07:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:46.532 07:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.532 07:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:46.532 07:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 07:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 [2024-11-18 07:50:41.746003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 07:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.629 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.629 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:49.629 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.629 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:49.629 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:51.537 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:51.537 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:51.537 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.537 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:51.537 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.538 [2024-11-18 07:50:44.616346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.538 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.798 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.798 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:51.798 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.798 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.798 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.798 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.368 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.368 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:52.368 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:52.368 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:52.368 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:54.275 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:54.275 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:54.275 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.275 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:54.275 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.275 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:54.275 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.536 [2024-11-18 07:50:47.440397] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.536 07:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.104 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.104 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:55.104 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.104 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:55.104 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
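The trace above shows `waitforserial` polling for the new block device after `nvme connect`. A minimal runnable sketch of that polling loop, based on how it appears in the trace (`common/autotest_common.sh`); `lsblk` is stubbed here so the sketch runs without a real NVMe/TCP connection, which is an assumption for illustration only:

```shell
# Gist of waitforserial as it runs in the trace: poll
# `lsblk -l -o NAME,SERIAL` every 2s until a device with the expected
# serial appears, giving up after ~15 tries.
# NOTE: this lsblk is a stub that always reports one matching device.
lsblk() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

waitforserial() {
    local serial=$1 nvme_device_counter=1 nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        # grep -c counts devices whose SERIAL column matches
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo connected
```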
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.643 [2024-11-18 07:50:50.348322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.643 07:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.643 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.213 07:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.213 07:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:58.213 07:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.213 07:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:58.213 07:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.119 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.119 [2024-11-18 07:50:53.180508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.120 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 [2024-11-18 07:50:53.228552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.379 
07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 [2024-11-18 07:50:53.276721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.379 
07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 [2024-11-18 07:50:53.324892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.379 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 [2024-11-18 
07:50:53.373058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 
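The five identical iterations above follow the loop at `target/rpc.sh@99`: create a subsystem, add a TCP listener and a Malloc namespace, then tear both down. A sketch of that shape, with `rpc_cmd` stubbed to echo its arguments so it runs anywhere (in the real test it forwards to SPDK's `scripts/rpc.py` against the running target):

```shell
# Stub: the real rpc_cmd issues JSON-RPC calls to the nvmf target.
rpc_cmd() { echo "rpc: $*"; }

loops=5
nqn=nqn.2016-06.io.spdk:cnode1
calls=$(
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
        rpc_cmd nvmf_subsystem_remove_ns "$nqn" 1
        rpc_cmd nvmf_delete_subsystem "$nqn"
    done
)
# 6 RPC calls per iteration, 5 iterations
echo "$calls" | wc -l
```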
07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:00.380 "tick_rate": 2700000000, 00:17:00.380 "poll_groups": [ 00:17:00.380 { 00:17:00.380 "name": "nvmf_tgt_poll_group_000", 00:17:00.380 "admin_qpairs": 2, 00:17:00.380 "io_qpairs": 84, 00:17:00.380 "current_admin_qpairs": 0, 00:17:00.380 "current_io_qpairs": 0, 00:17:00.380 "pending_bdev_io": 0, 00:17:00.380 "completed_nvme_io": 137, 00:17:00.380 "transports": [ 00:17:00.380 { 00:17:00.380 "trtype": "TCP" 00:17:00.380 } 00:17:00.380 ] 00:17:00.380 }, 00:17:00.380 { 00:17:00.380 "name": "nvmf_tgt_poll_group_001", 00:17:00.380 "admin_qpairs": 2, 00:17:00.380 "io_qpairs": 84, 00:17:00.380 "current_admin_qpairs": 0, 00:17:00.380 "current_io_qpairs": 0, 00:17:00.380 "pending_bdev_io": 0, 00:17:00.380 "completed_nvme_io": 134, 00:17:00.380 "transports": [ 00:17:00.380 { 00:17:00.380 "trtype": "TCP" 00:17:00.380 } 00:17:00.380 ] 00:17:00.380 }, 00:17:00.380 { 00:17:00.380 "name": "nvmf_tgt_poll_group_002", 00:17:00.380 "admin_qpairs": 1, 00:17:00.380 "io_qpairs": 84, 00:17:00.380 "current_admin_qpairs": 0, 00:17:00.380 "current_io_qpairs": 0, 00:17:00.380 "pending_bdev_io": 0, 00:17:00.380 "completed_nvme_io": 232, 00:17:00.380 "transports": [ 00:17:00.380 { 00:17:00.380 "trtype": "TCP" 00:17:00.380 } 00:17:00.380 ] 00:17:00.380 }, 00:17:00.380 { 00:17:00.380 "name": "nvmf_tgt_poll_group_003", 00:17:00.380 "admin_qpairs": 2, 00:17:00.380 "io_qpairs": 84, 
00:17:00.380 "current_admin_qpairs": 0, 00:17:00.380 "current_io_qpairs": 0, 00:17:00.380 "pending_bdev_io": 0, 00:17:00.380 "completed_nvme_io": 183, 00:17:00.380 "transports": [ 00:17:00.380 { 00:17:00.380 "trtype": "TCP" 00:17:00.380 } 00:17:00.380 ] 00:17:00.380 } 00:17:00.380 ] 00:17:00.380 }' 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:00.380 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
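The `jsum` helper in `target/rpc.sh` pipes `rpc_cmd nvmf_get_stats` through `jq '<filter>'` (printing one number per poll group) and sums the result with awk. Reproducing just the summation stage on the `admin_qpairs` values from the stats above (2, 2, 1, 2 across the four poll groups):

```shell
# The jq stage of jsum emits one value per poll group; awk accumulates them.
sum=$(printf '2\n2\n1\n2\n' | awk '{s+=$1} END {print s}')
echo "$sum"   # 7, matching the (( 7 > 0 )) check in the trace
```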
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.640 rmmod nvme_tcp 00:17:00.640 rmmod nvme_fabrics 00:17:00.640 rmmod nvme_keyring 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 702385 ']' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 702385 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 702385 ']' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 702385 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 702385 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 702385' 00:17:00.640 killing process with pid 702385 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 702385 00:17:00.640 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 702385 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.901 07:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.810 07:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:02.810 00:17:02.810 real 0m25.454s 00:17:02.810 user 1m22.929s 00:17:02.810 sys 0m4.206s 00:17:02.810 07:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.810 07:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.810 ************************************ 00:17:02.810 END TEST nvmf_rpc 00:17:02.810 
************************************ 00:17:03.069 07:50:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:03.069 07:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.069 07:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.069 07:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.069 ************************************ 00:17:03.069 START TEST nvmf_invalid 00:17:03.069 ************************************ 00:17:03.069 07:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:03.070 * Looking for test storage... 00:17:03.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.070 07:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.070 07:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.070 07:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.070 --rc genhtml_branch_coverage=1 00:17:03.070 --rc genhtml_function_coverage=1 00:17:03.070 --rc genhtml_legend=1 00:17:03.070 --rc geninfo_all_blocks=1 00:17:03.070 --rc geninfo_unexecuted_blocks=1 00:17:03.070 00:17:03.070 ' 
00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.070 --rc genhtml_branch_coverage=1 00:17:03.070 --rc genhtml_function_coverage=1 00:17:03.070 --rc genhtml_legend=1 00:17:03.070 --rc geninfo_all_blocks=1 00:17:03.070 --rc geninfo_unexecuted_blocks=1 00:17:03.070 00:17:03.070 ' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.070 --rc genhtml_branch_coverage=1 00:17:03.070 --rc genhtml_function_coverage=1 00:17:03.070 --rc genhtml_legend=1 00:17:03.070 --rc geninfo_all_blocks=1 00:17:03.070 --rc geninfo_unexecuted_blocks=1 00:17:03.070 00:17:03.070 ' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.070 --rc genhtml_branch_coverage=1 00:17:03.070 --rc genhtml_function_coverage=1 00:17:03.070 --rc genhtml_legend=1 00:17:03.070 --rc geninfo_all_blocks=1 00:17:03.070 --rc geninfo_unexecuted_blocks=1 00:17:03.070 00:17:03.070 ' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.070 07:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.070 
07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.070 07:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.070 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.071 07:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.071 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:05.608 07:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.608 07:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:05.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:05.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:05.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.608 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:05.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.609 07:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.609 07:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:05.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:17:05.609 00:17:05.609 --- 10.0.0.2 ping statistics --- 00:17:05.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.609 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:17:05.609 00:17:05.609 --- 10.0.0.1 ping statistics --- 00:17:05.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.609 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:05.609 07:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=707001 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 707001 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 707001 ']' 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.609 [2024-11-18 07:50:58.367779] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:05.609 [2024-11-18 07:50:58.367892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.609 [2024-11-18 07:50:58.441416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.609 [2024-11-18 07:50:58.491568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.609 [2024-11-18 07:50:58.491633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.609 [2024-11-18 07:50:58.491647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.609 [2024-11-18 07:50:58.491659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.609 [2024-11-18 07:50:58.491669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:05.609 [2024-11-18 07:50:58.493285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.609 [2024-11-18 07:50:58.493351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.609 [2024-11-18 07:50:58.493401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.609 [2024-11-18 07:50:58.493404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:05.609 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13915 00:17:05.868 [2024-11-18 07:50:58.884713] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:05.868 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:05.868 { 00:17:05.868 "nqn": "nqn.2016-06.io.spdk:cnode13915", 00:17:05.868 "tgt_name": "foobar", 00:17:05.868 "method": "nvmf_create_subsystem", 00:17:05.868 "req_id": 1 00:17:05.868 } 00:17:05.868 Got JSON-RPC error 
response 00:17:05.868 response: 00:17:05.868 { 00:17:05.868 "code": -32603, 00:17:05.868 "message": "Unable to find target foobar" 00:17:05.868 }' 00:17:05.868 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:05.868 { 00:17:05.868 "nqn": "nqn.2016-06.io.spdk:cnode13915", 00:17:05.868 "tgt_name": "foobar", 00:17:05.868 "method": "nvmf_create_subsystem", 00:17:05.868 "req_id": 1 00:17:05.868 } 00:17:05.868 Got JSON-RPC error response 00:17:05.868 response: 00:17:05.868 { 00:17:05.868 "code": -32603, 00:17:05.868 "message": "Unable to find target foobar" 00:17:05.868 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:05.868 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:05.868 07:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28907 00:17:06.126 [2024-11-18 07:50:59.153616] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28907: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:06.126 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:06.126 { 00:17:06.126 "nqn": "nqn.2016-06.io.spdk:cnode28907", 00:17:06.126 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:06.126 "method": "nvmf_create_subsystem", 00:17:06.126 "req_id": 1 00:17:06.126 } 00:17:06.126 Got JSON-RPC error response 00:17:06.126 response: 00:17:06.126 { 00:17:06.126 "code": -32602, 00:17:06.126 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:06.126 }' 00:17:06.126 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:06.126 { 00:17:06.126 "nqn": "nqn.2016-06.io.spdk:cnode28907", 00:17:06.126 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:06.126 "method": "nvmf_create_subsystem", 
00:17:06.126 "req_id": 1 00:17:06.126 } 00:17:06.126 Got JSON-RPC error response 00:17:06.126 response: 00:17:06.126 { 00:17:06.126 "code": -32602, 00:17:06.126 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:06.126 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:06.126 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:06.126 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25371 00:17:06.383 [2024-11-18 07:50:59.442574] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25371: invalid model number 'SPDK_Controller' 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:06.383 { 00:17:06.383 "nqn": "nqn.2016-06.io.spdk:cnode25371", 00:17:06.383 "model_number": "SPDK_Controller\u001f", 00:17:06.383 "method": "nvmf_create_subsystem", 00:17:06.383 "req_id": 1 00:17:06.383 } 00:17:06.383 Got JSON-RPC error response 00:17:06.383 response: 00:17:06.383 { 00:17:06.383 "code": -32602, 00:17:06.383 "message": "Invalid MN SPDK_Controller\u001f" 00:17:06.383 }' 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:06.383 { 00:17:06.383 "nqn": "nqn.2016-06.io.spdk:cnode25371", 00:17:06.383 "model_number": "SPDK_Controller\u001f", 00:17:06.383 "method": "nvmf_create_subsystem", 00:17:06.383 "req_id": 1 00:17:06.383 } 00:17:06.383 Got JSON-RPC error response 00:17:06.383 response: 00:17:06.383 { 00:17:06.383 "code": -32602, 00:17:06.383 "message": "Invalid MN SPDK_Controller\u001f" 00:17:06.383 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.383 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:06.642 07:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:06.642 07:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:06.642 07:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.642 07:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]] 00:17:06.642 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}7#1Qx3D[lFqYa /dev/null' 00:17:10.005 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.544 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:12.544 00:17:12.544 real 0m9.107s 00:17:12.544 user 0m22.201s 00:17:12.544 sys 0m2.486s 00:17:12.544 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.544 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.544 ************************************ 00:17:12.544 END TEST nvmf_invalid 00:17:12.544 ************************************ 00:17:12.544 07:51:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:12.544 07:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.544 07:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.544 07:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.544 ************************************ 00:17:12.544 START TEST nvmf_connect_stress 00:17:12.544 ************************************ 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:12.545 * Looking for test storage... 
00:17:12.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:12.545 07:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.545 07:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.545 --rc genhtml_branch_coverage=1 00:17:12.545 --rc genhtml_function_coverage=1 00:17:12.545 --rc genhtml_legend=1 00:17:12.545 --rc geninfo_all_blocks=1 00:17:12.545 --rc geninfo_unexecuted_blocks=1 00:17:12.545 00:17:12.545 ' 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.545 --rc genhtml_branch_coverage=1 00:17:12.545 --rc genhtml_function_coverage=1 00:17:12.545 --rc genhtml_legend=1 00:17:12.545 --rc geninfo_all_blocks=1 00:17:12.545 --rc geninfo_unexecuted_blocks=1 00:17:12.545 00:17:12.545 ' 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.545 --rc genhtml_branch_coverage=1 00:17:12.545 --rc genhtml_function_coverage=1 00:17:12.545 --rc genhtml_legend=1 00:17:12.545 --rc geninfo_all_blocks=1 00:17:12.545 --rc geninfo_unexecuted_blocks=1 00:17:12.545 00:17:12.545 ' 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.545 --rc genhtml_branch_coverage=1 00:17:12.545 --rc genhtml_function_coverage=1 00:17:12.545 --rc genhtml_legend=1 00:17:12.545 --rc geninfo_all_blocks=1 00:17:12.545 --rc geninfo_unexecuted_blocks=1 00:17:12.545 00:17:12.545 ' 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.545 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:12.546 07:51:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.454 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.454 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.454 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.454 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.454 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.455 07:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:14.455 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.455 07:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:14.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.455 07:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:14.455 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:14.455 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.455 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:14.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:14.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:17:14.456 00:17:14.456 --- 10.0.0.2 ping statistics --- 00:17:14.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.456 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:17:14.456 00:17:14.456 --- 10.0.0.1 ping statistics --- 00:17:14.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.456 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:14.456 07:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=709988 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 709988 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 709988 ']' 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.456 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.714 [2024-11-18 07:51:07.570820] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:14.714 [2024-11-18 07:51:07.570914] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.714 [2024-11-18 07:51:07.644733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.714 [2024-11-18 07:51:07.693557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.714 [2024-11-18 07:51:07.693622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.714 [2024-11-18 07:51:07.693637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.714 [2024-11-18 07:51:07.693648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.714 [2024-11-18 07:51:07.693658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:14.714 [2024-11-18 07:51:07.695259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.714 [2024-11-18 07:51:07.695319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.714 [2024-11-18 07:51:07.695315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.973 [2024-11-18 07:51:07.834693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.973 [2024-11-18 07:51:07.851889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.973 NULL1 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=710102 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.973 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.974 07:51:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.232 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.232 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:15.232 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.232 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.232 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.490 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.490 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:15.490 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.490 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.490 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.057 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.057 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:16.057 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.057 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.057 07:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.314 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.314 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:16.314 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.314 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.314 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.571 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.572 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:16.572 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.572 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.572 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.831 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.831 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:16.831 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.831 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.831 07:51:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.089 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.089 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:17.089 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.089 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.089 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.657 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.658 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:17.658 07:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.658 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.658 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.915 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.915 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:17.915 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.915 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.915 07:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.174 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.174 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:18.174 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.174 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.174 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.433 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.433 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:18.434 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.434 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.434 07:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.693 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.693 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:18.693 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.693 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.693 07:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.260 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.260 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:19.260 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.260 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.260 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.518 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.518 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:19.518 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.518 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.518 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.776 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.776 07:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:19.776 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.776 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.776 07:51:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.036 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.036 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:20.036 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.036 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.036 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.331 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.331 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:20.331 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.331 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.331 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.616 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.616 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:20.616 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.616 07:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.616 07:51:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.189 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.189 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:21.189 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.189 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.189 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.448 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:21.448 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.448 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.448 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.708 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.708 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:21.708 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.708 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.708 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.968 07:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.968 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:21.968 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.968 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.968 07:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.227 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.227 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:22.227 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.227 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.227 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.794 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.794 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:22.794 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.794 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.794 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.053 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.053 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:23.053 
07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.053 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.053 07:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.313 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.313 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:23.313 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.313 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.313 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.573 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.573 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:23.573 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.573 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.573 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.832 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.832 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:23.832 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.832 07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.832 
07:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.400 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.400 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:24.400 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.400 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.400 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.659 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.659 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:24.659 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.659 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.659 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.920 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.920 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:24.920 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.920 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.920 07:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.179 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.179 07:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 710102 00:17:25.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (710102) - No such process 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 710102 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.179 rmmod nvme_tcp 00:17:25.179 rmmod nvme_fabrics 00:17:25.179 rmmod nvme_keyring 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 709988 ']' 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 709988 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 709988 ']' 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 709988 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.179 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 709988 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 709988' 00:17:25.438 killing process with pid 709988 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 709988 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 709988 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.438 07:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.973 00:17:27.973 real 0m15.436s 00:17:27.973 user 0m38.658s 00:17:27.973 sys 0m5.972s 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.973 ************************************ 00:17:27.973 END TEST nvmf_connect_stress 00:17:27.973 ************************************ 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.973 ************************************ 00:17:27.973 START TEST nvmf_fused_ordering 00:17:27.973 ************************************ 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.973 * Looking for test storage... 00:17:27.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:27.973 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.974 --rc genhtml_branch_coverage=1 00:17:27.974 --rc genhtml_function_coverage=1 00:17:27.974 --rc genhtml_legend=1 00:17:27.974 --rc geninfo_all_blocks=1 00:17:27.974 --rc geninfo_unexecuted_blocks=1 00:17:27.974 00:17:27.974 ' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.974 --rc genhtml_branch_coverage=1 00:17:27.974 --rc genhtml_function_coverage=1 00:17:27.974 --rc genhtml_legend=1 00:17:27.974 --rc geninfo_all_blocks=1 00:17:27.974 --rc geninfo_unexecuted_blocks=1 00:17:27.974 00:17:27.974 ' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.974 --rc genhtml_branch_coverage=1 00:17:27.974 --rc genhtml_function_coverage=1 00:17:27.974 --rc genhtml_legend=1 00:17:27.974 --rc geninfo_all_blocks=1 00:17:27.974 --rc geninfo_unexecuted_blocks=1 00:17:27.974 00:17:27.974 ' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.974 --rc genhtml_branch_coverage=1 
00:17:27.974 --rc genhtml_function_coverage=1 00:17:27.974 --rc genhtml_legend=1 00:17:27.974 --rc geninfo_all_blocks=1 00:17:27.974 --rc geninfo_unexecuted_blocks=1 00:17:27.974 00:17:27.974 ' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.974 07:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.974 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.975 07:51:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.878 07:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:29.878 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.878 07:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:29.878 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.878 07:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:29.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:29.878 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:29.878 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.879 07:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:30.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:17:30.138 00:17:30.138 --- 10.0.0.2 ping statistics --- 00:17:30.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.138 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:30.138 00:17:30.138 --- 10.0.0.1 ping statistics --- 00:17:30.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.138 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:30.138 07:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=713537 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 713537 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 713537 ']' 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.138 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.138 [2024-11-18 07:51:23.219438] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:30.138 [2024-11-18 07:51:23.219531] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.398 [2024-11-18 07:51:23.291193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.398 [2024-11-18 07:51:23.335185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.398 [2024-11-18 07:51:23.335240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.398 [2024-11-18 07:51:23.335260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.398 [2024-11-18 07:51:23.335270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.398 [2024-11-18 07:51:23.335279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:30.398 [2024-11-18 07:51:23.335866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.398 [2024-11-18 07:51:23.472546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.398 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.399 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.659 [2024-11-18 07:51:23.488773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.659 NULL1 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.659 07:51:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:30.659 [2024-11-18 07:51:23.532615] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:30.659 [2024-11-18 07:51:23.532651] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713591 ] 00:17:30.919 Attached to nqn.2016-06.io.spdk:cnode1 00:17:30.919 Namespace ID: 1 size: 1GB 00:17:30.919 fused_ordering(0) 00:17:30.919 fused_ordering(1) 00:17:30.919 fused_ordering(2) 00:17:30.919 fused_ordering(3) 00:17:30.919 fused_ordering(4) 00:17:30.919 fused_ordering(5) 00:17:30.919 fused_ordering(6) 00:17:30.919 fused_ordering(7) 00:17:30.919 fused_ordering(8) 00:17:30.919 fused_ordering(9) 00:17:30.919 fused_ordering(10) 00:17:30.919 fused_ordering(11) 00:17:30.919 fused_ordering(12) 00:17:30.919 fused_ordering(13) 00:17:30.919 fused_ordering(14) 00:17:30.919 fused_ordering(15) 00:17:30.919 fused_ordering(16) 00:17:30.919 fused_ordering(17) 00:17:30.919 fused_ordering(18) 00:17:30.919 fused_ordering(19) 00:17:30.919 fused_ordering(20) 00:17:30.919 fused_ordering(21) 00:17:30.919 fused_ordering(22) 00:17:30.919 fused_ordering(23) 00:17:30.919 fused_ordering(24) 00:17:30.919 fused_ordering(25) 00:17:30.919 fused_ordering(26) 00:17:30.919 fused_ordering(27) 00:17:30.919 
fused_ordering(28) 00:17:30.919 fused_ordering(29) 00:17:30.919 fused_ordering(30) 00:17:30.919 fused_ordering(31) 00:17:30.919 fused_ordering(32) 00:17:30.919 fused_ordering(33) 00:17:30.919 fused_ordering(34) 00:17:30.919 fused_ordering(35) 00:17:30.919 fused_ordering(36) 00:17:30.919 fused_ordering(37) 00:17:30.919 fused_ordering(38) 00:17:30.919 fused_ordering(39) 00:17:30.919 fused_ordering(40) 00:17:30.919 fused_ordering(41) 00:17:30.919 fused_ordering(42) 00:17:30.919 fused_ordering(43) 00:17:30.919 fused_ordering(44) 00:17:30.919 fused_ordering(45) 00:17:30.919 fused_ordering(46) 00:17:30.919 fused_ordering(47) 00:17:30.919 fused_ordering(48) 00:17:30.919 fused_ordering(49) 00:17:30.919 fused_ordering(50) 00:17:30.919 fused_ordering(51) 00:17:30.919 fused_ordering(52) 00:17:30.919 fused_ordering(53) 00:17:30.919 fused_ordering(54) 00:17:30.919 fused_ordering(55) 00:17:30.919 fused_ordering(56) 00:17:30.919 fused_ordering(57) 00:17:30.919 fused_ordering(58) 00:17:30.919 fused_ordering(59) 00:17:30.919 fused_ordering(60) 00:17:30.919 fused_ordering(61) 00:17:30.919 fused_ordering(62) 00:17:30.919 fused_ordering(63) 00:17:30.919 fused_ordering(64) 00:17:30.919 fused_ordering(65) 00:17:30.919 fused_ordering(66) 00:17:30.919 fused_ordering(67) 00:17:30.919 fused_ordering(68) 00:17:30.919 fused_ordering(69) 00:17:30.919 fused_ordering(70) 00:17:30.919 fused_ordering(71) 00:17:30.919 fused_ordering(72) 00:17:30.919 fused_ordering(73) 00:17:30.919 fused_ordering(74) 00:17:30.919 fused_ordering(75) 00:17:30.919 fused_ordering(76) 00:17:30.919 fused_ordering(77) 00:17:30.919 fused_ordering(78) 00:17:30.919 fused_ordering(79) 00:17:30.919 fused_ordering(80) 00:17:30.919 fused_ordering(81) 00:17:30.919 fused_ordering(82) 00:17:30.919 fused_ordering(83) 00:17:30.919 fused_ordering(84) 00:17:30.919 fused_ordering(85) 00:17:30.919 fused_ordering(86) 00:17:30.919 fused_ordering(87) 00:17:30.919 fused_ordering(88) 00:17:30.919 fused_ordering(89) 00:17:30.919 
fused_ordering(90) 00:17:30.919 fused_ordering(91) 00:17:30.919 fused_ordering(92) 00:17:30.919 fused_ordering(93) 00:17:30.919 fused_ordering(94) 00:17:30.919 fused_ordering(95) 00:17:30.919 fused_ordering(96) 00:17:30.919 fused_ordering(97) 00:17:30.919 fused_ordering(98) 00:17:30.919 fused_ordering(99) 00:17:30.919 fused_ordering(100) 00:17:30.919 fused_ordering(101) 00:17:30.919 fused_ordering(102) 00:17:30.919 fused_ordering(103) 00:17:30.919 fused_ordering(104) 00:17:30.919 fused_ordering(105) 00:17:30.919 fused_ordering(106) 00:17:30.919 fused_ordering(107) 00:17:30.919 fused_ordering(108) 00:17:30.919 fused_ordering(109) 00:17:30.919 fused_ordering(110) 00:17:30.919 fused_ordering(111) 00:17:30.919 fused_ordering(112) 00:17:30.919 fused_ordering(113) 00:17:30.919 fused_ordering(114) 00:17:30.919 fused_ordering(115) 00:17:30.919 fused_ordering(116) 00:17:30.919 fused_ordering(117) 00:17:30.919 fused_ordering(118) 00:17:30.919 fused_ordering(119) 00:17:30.919 fused_ordering(120) 00:17:30.919 fused_ordering(121) 00:17:30.919 fused_ordering(122) 00:17:30.919 fused_ordering(123) 00:17:30.919 fused_ordering(124) 00:17:30.919 fused_ordering(125) 00:17:30.919 fused_ordering(126) 00:17:30.919 fused_ordering(127) 00:17:30.919 fused_ordering(128) 00:17:30.919 fused_ordering(129) 00:17:30.919 fused_ordering(130) 00:17:30.919 fused_ordering(131) 00:17:30.919 fused_ordering(132) 00:17:30.919 fused_ordering(133) 00:17:30.919 fused_ordering(134) 00:17:30.919 fused_ordering(135) 00:17:30.919 fused_ordering(136) 00:17:30.919 fused_ordering(137) 00:17:30.919 fused_ordering(138) 00:17:30.919 fused_ordering(139) 00:17:30.919 fused_ordering(140) 00:17:30.919 fused_ordering(141) 00:17:30.919 fused_ordering(142) 00:17:30.919 fused_ordering(143) 00:17:30.919 fused_ordering(144) 00:17:30.919 fused_ordering(145) 00:17:30.919 fused_ordering(146) 00:17:30.919 fused_ordering(147) 00:17:30.919 fused_ordering(148) 00:17:30.919 fused_ordering(149) 00:17:30.919 fused_ordering(150) 
00:17:30.919 fused_ordering(151) 00:17:30.919 fused_ordering(152) 00:17:30.919 fused_ordering(153) 00:17:30.919 fused_ordering(154) 00:17:30.920 fused_ordering(155) 00:17:30.920 fused_ordering(156) 00:17:30.920 fused_ordering(157) 00:17:30.920 fused_ordering(158) 00:17:30.920 fused_ordering(159) 00:17:30.920 fused_ordering(160) 00:17:30.920 fused_ordering(161) 00:17:30.920 fused_ordering(162) 00:17:30.920 fused_ordering(163) 00:17:30.920 fused_ordering(164) 00:17:30.920 fused_ordering(165) 00:17:30.920 fused_ordering(166) 00:17:30.920 fused_ordering(167) 00:17:30.920 fused_ordering(168) 00:17:30.920 fused_ordering(169) 00:17:30.920 fused_ordering(170) 00:17:30.920 fused_ordering(171) 00:17:30.920 fused_ordering(172) 00:17:30.920 fused_ordering(173) 00:17:30.920 fused_ordering(174) 00:17:30.920 fused_ordering(175) 00:17:30.920 fused_ordering(176) 00:17:30.920 fused_ordering(177) 00:17:30.920 fused_ordering(178) 00:17:30.920 fused_ordering(179) 00:17:30.920 fused_ordering(180) 00:17:30.920 fused_ordering(181) 00:17:30.920 fused_ordering(182) 00:17:30.920 fused_ordering(183) 00:17:30.920 fused_ordering(184) 00:17:30.920 fused_ordering(185) 00:17:30.920 fused_ordering(186) 00:17:30.920 fused_ordering(187) 00:17:30.920 fused_ordering(188) 00:17:30.920 fused_ordering(189) 00:17:30.920 fused_ordering(190) 00:17:30.920 fused_ordering(191) 00:17:30.920 fused_ordering(192) 00:17:30.920 fused_ordering(193) 00:17:30.920 fused_ordering(194) 00:17:30.920 fused_ordering(195) 00:17:30.920 fused_ordering(196) 00:17:30.920 fused_ordering(197) 00:17:30.920 fused_ordering(198) 00:17:30.920 fused_ordering(199) 00:17:30.920 fused_ordering(200) 00:17:30.920 fused_ordering(201) 00:17:30.920 fused_ordering(202) 00:17:30.920 fused_ordering(203) 00:17:30.920 fused_ordering(204) 00:17:30.920 fused_ordering(205) 00:17:31.491 fused_ordering(206) 00:17:31.491 fused_ordering(207) 00:17:31.491 fused_ordering(208) 00:17:31.491 fused_ordering(209) 00:17:31.491 fused_ordering(210) 00:17:31.491 
fused_ordering(211) 00:17:31.491 fused_ordering(212) 00:17:31.491 fused_ordering(213) 00:17:31.491 fused_ordering(214) 00:17:31.491 fused_ordering(215) 00:17:31.491 fused_ordering(216) 00:17:31.491 fused_ordering(217) 00:17:31.491 fused_ordering(218) 00:17:31.491 fused_ordering(219) 00:17:31.491 fused_ordering(220) 00:17:31.491 fused_ordering(221) 00:17:31.491 fused_ordering(222) 00:17:31.491 fused_ordering(223) 00:17:31.491 fused_ordering(224) 00:17:31.491 fused_ordering(225) 00:17:31.491 fused_ordering(226) 00:17:31.491 fused_ordering(227) 00:17:31.491 fused_ordering(228) 00:17:31.491 fused_ordering(229) 00:17:31.491 fused_ordering(230) 00:17:31.491 fused_ordering(231) 00:17:31.491 fused_ordering(232) 00:17:31.491 fused_ordering(233) 00:17:31.491 fused_ordering(234) 00:17:31.491 fused_ordering(235) 00:17:31.491 fused_ordering(236) 00:17:31.491 fused_ordering(237) 00:17:31.491 fused_ordering(238) 00:17:31.491 fused_ordering(239) 00:17:31.491 fused_ordering(240) 00:17:31.491 fused_ordering(241) 00:17:31.491 fused_ordering(242) 00:17:31.491 fused_ordering(243) 00:17:31.491 fused_ordering(244) 00:17:31.491 fused_ordering(245) 00:17:31.491 fused_ordering(246) 00:17:31.491 fused_ordering(247) 00:17:31.491 fused_ordering(248) 00:17:31.491 fused_ordering(249) 00:17:31.491 fused_ordering(250) 00:17:31.491 fused_ordering(251) 00:17:31.491 fused_ordering(252) 00:17:31.491 fused_ordering(253) 00:17:31.491 fused_ordering(254) 00:17:31.491 fused_ordering(255) 00:17:31.491 fused_ordering(256) 00:17:31.491 fused_ordering(257) 00:17:31.491 fused_ordering(258) 00:17:31.491 fused_ordering(259) 00:17:31.491 fused_ordering(260) 00:17:31.491 fused_ordering(261) 00:17:31.491 fused_ordering(262) 00:17:31.491 fused_ordering(263) 00:17:31.491 fused_ordering(264) 00:17:31.491 fused_ordering(265) 00:17:31.491 fused_ordering(266) 00:17:31.491 fused_ordering(267) 00:17:31.491 fused_ordering(268) 00:17:31.491 fused_ordering(269) 00:17:31.491 fused_ordering(270) 00:17:31.491 fused_ordering(271) 
00:17:31.491 fused_ordering(272) 00:17:31.491 fused_ordering(273) 00:17:31.491 fused_ordering(274) 00:17:31.491 fused_ordering(275) 00:17:31.491 fused_ordering(276) 00:17:31.491 fused_ordering(277) 00:17:31.491 fused_ordering(278) 00:17:31.491 fused_ordering(279) 00:17:31.491 fused_ordering(280) 00:17:31.491 fused_ordering(281) 00:17:31.491 fused_ordering(282) 00:17:31.491 fused_ordering(283) 00:17:31.491 fused_ordering(284) 00:17:31.491 fused_ordering(285) 00:17:31.491 fused_ordering(286) 00:17:31.491 fused_ordering(287) 00:17:31.491 fused_ordering(288) 00:17:31.491 fused_ordering(289) 00:17:31.491 fused_ordering(290) 00:17:31.491 fused_ordering(291) 00:17:31.491 fused_ordering(292) 00:17:31.491 fused_ordering(293) 00:17:31.491 fused_ordering(294) 00:17:31.491 fused_ordering(295) 00:17:31.491 fused_ordering(296) 00:17:31.491 fused_ordering(297) 00:17:31.491 fused_ordering(298) 00:17:31.491 fused_ordering(299) 00:17:31.491 fused_ordering(300) 00:17:31.491 fused_ordering(301) 00:17:31.491 fused_ordering(302) 00:17:31.491 fused_ordering(303) 00:17:31.491 fused_ordering(304) 00:17:31.491 fused_ordering(305) 00:17:31.491 fused_ordering(306) 00:17:31.491 fused_ordering(307) 00:17:31.491 fused_ordering(308) 00:17:31.491 fused_ordering(309) 00:17:31.491 fused_ordering(310) 00:17:31.491 fused_ordering(311) 00:17:31.491 fused_ordering(312) 00:17:31.491 fused_ordering(313) 00:17:31.491 fused_ordering(314) 00:17:31.491 fused_ordering(315) 00:17:31.491 fused_ordering(316) 00:17:31.491 fused_ordering(317) 00:17:31.491 fused_ordering(318) 00:17:31.491 fused_ordering(319) 00:17:31.491 fused_ordering(320) 00:17:31.491 fused_ordering(321) 00:17:31.491 fused_ordering(322) 00:17:31.491 fused_ordering(323) 00:17:31.491 fused_ordering(324) 00:17:31.491 fused_ordering(325) 00:17:31.491 fused_ordering(326) 00:17:31.491 fused_ordering(327) 00:17:31.491 fused_ordering(328) 00:17:31.491 fused_ordering(329) 00:17:31.491 fused_ordering(330) 00:17:31.491 fused_ordering(331) 00:17:31.491 
00:17:31.491 fused_ordering(332) … 00:17:32.909 fused_ordering(997) [repetitive fused_ordering counter output elided: iterations 332–997, one per line, timestamps 00:17:31.491 through 00:17:32.909]
00:17:32.909 fused_ordering(998) 00:17:32.909 fused_ordering(999) 00:17:32.909 fused_ordering(1000) 00:17:32.909 fused_ordering(1001) 00:17:32.909 fused_ordering(1002) 00:17:32.909 fused_ordering(1003) 00:17:32.909 fused_ordering(1004) 00:17:32.909 fused_ordering(1005) 00:17:32.909 fused_ordering(1006) 00:17:32.909 fused_ordering(1007) 00:17:32.909 fused_ordering(1008) 00:17:32.909 fused_ordering(1009) 00:17:32.910 fused_ordering(1010) 00:17:32.910 fused_ordering(1011) 00:17:32.910 fused_ordering(1012) 00:17:32.910 fused_ordering(1013) 00:17:32.910 fused_ordering(1014) 00:17:32.910 fused_ordering(1015) 00:17:32.910 fused_ordering(1016) 00:17:32.910 fused_ordering(1017) 00:17:32.910 fused_ordering(1018) 00:17:32.910 fused_ordering(1019) 00:17:32.910 fused_ordering(1020) 00:17:32.910 fused_ordering(1021) 00:17:32.910 fused_ordering(1022) 00:17:32.910 fused_ordering(1023) 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:32.910 rmmod nvme_tcp 00:17:32.910 rmmod nvme_fabrics 00:17:32.910 rmmod nvme_keyring 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 713537 ']' 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 713537 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 713537 ']' 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 713537 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713537 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713537' 00:17:32.910 killing process with pid 713537 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 713537 00:17:32.910 07:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 713537 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.169 07:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:35.709 00:17:35.709 real 0m7.597s 00:17:35.709 user 0m5.032s 00:17:35.709 sys 0m3.177s 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.709 ************************************ 00:17:35.709 END TEST nvmf_fused_ordering 00:17:35.709 ************************************ 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:35.709 07:51:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.709 ************************************ 00:17:35.709 START TEST nvmf_ns_masking 00:17:35.709 ************************************ 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:35.709 * Looking for test storage... 00:17:35.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.709 07:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:35.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.709 --rc genhtml_branch_coverage=1 00:17:35.709 --rc genhtml_function_coverage=1 00:17:35.709 --rc genhtml_legend=1 00:17:35.709 --rc geninfo_all_blocks=1 00:17:35.709 --rc geninfo_unexecuted_blocks=1 00:17:35.709 00:17:35.709 ' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:35.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.709 --rc genhtml_branch_coverage=1 00:17:35.709 --rc genhtml_function_coverage=1 00:17:35.709 --rc genhtml_legend=1 00:17:35.709 --rc geninfo_all_blocks=1 00:17:35.709 --rc geninfo_unexecuted_blocks=1 00:17:35.709 00:17:35.709 ' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:35.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.709 --rc genhtml_branch_coverage=1 00:17:35.709 --rc genhtml_function_coverage=1 00:17:35.709 --rc genhtml_legend=1 00:17:35.709 --rc geninfo_all_blocks=1 00:17:35.709 --rc geninfo_unexecuted_blocks=1 00:17:35.709 00:17:35.709 ' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:35.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.709 --rc genhtml_branch_coverage=1 00:17:35.709 --rc 
genhtml_function_coverage=1 00:17:35.709 --rc genhtml_legend=1 00:17:35.709 --rc geninfo_all_blocks=1 00:17:35.709 --rc geninfo_unexecuted_blocks=1 00:17:35.709 00:17:35.709 ' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.709 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0da20402-8ae3-49b6-a5d1-36f88d197785 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f8ecb06b-2afb-4dfe-a718-ab96fee00d5b 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=872fbd56-b7c4-4a11-86a0-3e15f5ffc57c 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:35.710 07:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.623 07:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.623 07:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:37.623 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.624 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.624 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:17:37.624 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.624 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:17:37.624 00:17:37.624 --- 10.0.0.2 ping statistics --- 00:17:37.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.624 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:37.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:17:37.624 00:17:37.624 --- 10.0.0.1 ping statistics --- 00:17:37.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.624 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=715800 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 715800 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 715800 ']' 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.624 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.625 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.625 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.883 [2024-11-18 07:51:30.759590] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:37.883 [2024-11-18 07:51:30.759673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.883 [2024-11-18 07:51:30.837329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.883 [2024-11-18 07:51:30.881467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.883 [2024-11-18 07:51:30.881528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:37.883 [2024-11-18 07:51:30.881552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.883 [2024-11-18 07:51:30.881564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.883 [2024-11-18 07:51:30.881574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.883 [2024-11-18 07:51:30.882144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.142 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.142 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:38.142 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.142 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.142 07:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.142 07:51:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.142 07:51:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:38.400 [2024-11-18 07:51:31.320852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.400 07:51:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:38.400 07:51:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:38.400 07:51:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:38.660 Malloc1 00:17:38.660 07:51:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:38.919 Malloc2 00:17:38.919 07:51:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:39.485 07:51:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:39.485 07:51:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.745 [2024-11-18 07:51:32.822961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.004 07:51:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:40.004 07:51:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 872fbd56-b7c4-4a11-86a0-3e15f5ffc57c -a 10.0.0.2 -s 4420 -i 4 00:17:40.004 07:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:40.004 07:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:40.004 07:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.004 07:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:40.004 07:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.545 [ 0]:0x1 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.545 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.545 
07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6118a446cfc64cd98735dcc906a4fda3 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6118a446cfc64cd98735dcc906a4fda3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.546 [ 0]:0x1 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6118a446cfc64cd98735dcc906a4fda3 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6118a446cfc64cd98735dcc906a4fda3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:42.546 [ 1]:0x2 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b39a07d1b94dc08ff450bc062d7514 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b39a07d1b94dc08ff450bc062d7514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:42.546 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.804 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.064 07:51:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:43.323 07:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:43.323 07:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 872fbd56-b7c4-4a11-86a0-3e15f5ffc57c -a 10.0.0.2 -s 4420 -i 4 00:17:43.583 07:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:43.583 07:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:43.583 07:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:43.583 07:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:43.583 07:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:43.583 07:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
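The `NOT ns_is_visible 0x1` invocation above relies on a wrapper that inverts a command's exit status, so a masked namespace *failing* the visibility check registers as a test pass. A hedged re-creation of that pattern (the real helper in `autotest_common.sh` also validates the argument and tracks error codes; this is only the core idea):

```shell
# Minimal sketch of the NOT-style wrapper visible in the log: run a command
# and succeed only if it fails.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  else
    return 0   # command failed, which is the expected outcome
  fi
}

NOT false && echo "failure correctly inverted"
NOT true  || echo "success correctly rejected"
```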
00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:45.490 [ 0]:0x2 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b39a07d1b94dc08ff450bc062d7514 00:17:45.490 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b39a07d1b94dc08ff450bc062d7514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:45.491 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:45.749 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:45.749 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:45.749 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:45.749 [ 0]:0x1 00:17:45.749 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:45.749 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.007 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6118a446cfc64cd98735dcc906a4fda3 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6118a446cfc64cd98735dcc906a4fda3 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.008 [ 1]:0x2 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b39a07d1b94dc08ff450bc062d7514 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b39a07d1b94dc08ff450bc062d7514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.008 07:51:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.267 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.268 [ 0]:0x2 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b39a07d1b94dc08ff450bc062d7514 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b39a07d1b94dc08ff450bc062d7514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:46.268 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:46.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.529 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.787 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:46.787 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 872fbd56-b7c4-4a11-86a0-3e15f5ffc57c -a 10.0.0.2 -s 4420 -i 4 00:17:46.787 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:46.787 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:46.787 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:46.787 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:46.787 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:46.787 07:51:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.327 07:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.327 [ 0]:0x1 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.327 07:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6118a446cfc64cd98735dcc906a4fda3 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6118a446cfc64cd98735dcc906a4fda3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.327 [ 1]:0x2 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b39a07d1b94dc08ff450bc062d7514 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b39a07d1b94dc08ff450bc062d7514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.327 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:49.586 
07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.586 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.586 [ 0]:0x2 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b39a07d1b94dc08ff450bc062d7514 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b39a07d1b94dc08ff450bc062d7514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.587 07:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:49.587 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.847 [2024-11-18 07:51:42.860997] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:49.847 request: 00:17:49.847 { 00:17:49.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.847 "nsid": 2, 00:17:49.847 "host": "nqn.2016-06.io.spdk:host1", 00:17:49.847 "method": "nvmf_ns_remove_host", 00:17:49.847 "req_id": 1 00:17:49.847 } 00:17:49.847 Got JSON-RPC error response 00:17:49.847 response: 00:17:49.847 { 00:17:49.847 "code": -32602, 00:17:49.847 "message": "Invalid parameters" 00:17:49.847 } 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.847 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:50.107 07:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.107 [ 0]:0x2 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.107 07:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b39a07d1b94dc08ff450bc062d7514 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b39a07d1b94dc08ff450bc062d7514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=717420 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 717420 /var/tmp/host.sock 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 717420 ']' 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:50.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.107 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.368 [2024-11-18 07:51:43.219026] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
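Each reconnect earlier in the log resolves `ctrl_id` with the same `jq` filter over `nvme list-subsys -o json`. A standalone sketch of that lookup, run against a hand-written minimal JSON stand-in rather than captured output (assumes `jq` is installed):

```shell
# Sketch of the controller-name lookup from the log: select the subsystem by
# NQN and take the first path's device name. The JSON is a hypothetical
# minimal example of `nvme list-subsys -o json` structure.
json='[{"Subsystems":[{"NQN":"nqn.2016-06.io.spdk:cnode1","Paths":[{"Name":"nvme0"}]}]}]'
ctrl_id=$(echo "$json" | jq -r '.[].Subsystems[]
  | select(.NQN=="nqn.2016-06.io.spdk:cnode1")
  | .Paths[0].Name')
echo "$ctrl_id"
```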
00:17:50.368 [2024-11-18 07:51:43.219122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717420 ] 00:17:50.368 [2024-11-18 07:51:43.288463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.368 [2024-11-18 07:51:43.335730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.628 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.628 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:50.628 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:50.886 07:51:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:51.452 07:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0da20402-8ae3-49b6-a5d1-36f88d197785 00:17:51.452 07:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:51.452 07:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0DA204028AE349B6A5D136F88D197785 -i 00:17:51.711 07:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f8ecb06b-2afb-4dfe-a718-ab96fee00d5b 00:17:51.711 07:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:51.711 07:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F8ECB06B2AFB4DFEA718AB96FEE00D5B -i 00:17:51.969 07:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:52.228 07:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:52.487 07:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:52.487 07:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:52.746 nvme0n1 00:17:52.746 07:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:52.746 07:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:53.316 nvme1n2 00:17:53.316 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:53.316 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:53.316 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:53.316 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:53.316 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:53.574 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:53.574 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:53.574 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:53.574 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:53.832 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0da20402-8ae3-49b6-a5d1-36f88d197785 == \0\d\a\2\0\4\0\2\-\8\a\e\3\-\4\9\b\6\-\a\5\d\1\-\3\6\f\8\8\d\1\9\7\7\8\5 ]] 00:17:53.832 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:53.832 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:53.832 07:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:54.125 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f8ecb06b-2afb-4dfe-a718-ab96fee00d5b == \f\8\e\c\b\0\6\b\-\2\a\f\b\-\4\d\f\e\-\a\7\1\8\-\a\b\9\6\f\e\e\0\0\d\5\b ]] 00:17:54.125 07:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.413 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 0da20402-8ae3-49b6-a5d1-36f88d197785 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0DA204028AE349B6A5D136F88D197785 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0DA204028AE349B6A5D136F88D197785 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:54.671 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0DA204028AE349B6A5D136F88D197785 00:17:54.929 [2024-11-18 07:51:47.935583] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:54.929 [2024-11-18 07:51:47.935624] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:54.929 [2024-11-18 07:51:47.935647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.929 request: 00:17:54.929 { 00:17:54.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.929 "namespace": { 00:17:54.929 "bdev_name": "invalid", 00:17:54.929 "nsid": 1, 00:17:54.929 "nguid": "0DA204028AE349B6A5D136F88D197785", 00:17:54.929 "no_auto_visible": false 00:17:54.929 }, 00:17:54.929 "method": "nvmf_subsystem_add_ns", 00:17:54.929 "req_id": 1 00:17:54.929 } 00:17:54.929 Got JSON-RPC error response 00:17:54.929 response: 00:17:54.929 { 00:17:54.929 "code": -32602, 00:17:54.929 "message": "Invalid parameters" 00:17:54.929 } 00:17:54.929 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:54.929 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.929 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.929 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.929 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 0da20402-8ae3-49b6-a5d1-36f88d197785 00:17:54.929 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:54.929 07:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0DA204028AE349B6A5D136F88D197785 -i 00:17:55.188 07:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 717420 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 717420 ']' 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 717420 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717420 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717420' 00:17:57.717 killing process with pid 717420 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 717420 00:17:57.717 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 717420 00:17:57.975 07:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.233 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:58.233 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:58.233 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:58.233 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:58.233 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.233 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:58.233 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.233 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.233 rmmod nvme_tcp 00:17:58.233 rmmod 
nvme_fabrics 00:17:58.491 rmmod nvme_keyring 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 715800 ']' 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 715800 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 715800 ']' 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 715800 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 715800 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 715800' 00:17:58.491 killing process with pid 715800 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 715800 00:17:58.491 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 715800 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:58.751 07:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.751 07:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:00.658 00:18:00.658 real 0m25.447s 00:18:00.658 user 0m36.999s 00:18:00.658 sys 0m4.714s 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:00.658 ************************************ 00:18:00.658 END TEST nvmf_ns_masking 00:18:00.658 ************************************ 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.658 ************************************ 00:18:00.658 START TEST nvmf_nvme_cli 00:18:00.658 ************************************ 00:18:00.658 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:00.918 * Looking for test storage... 00:18:00.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.918 07:51:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.918 --rc genhtml_branch_coverage=1 00:18:00.918 --rc genhtml_function_coverage=1 00:18:00.918 --rc genhtml_legend=1 00:18:00.918 --rc geninfo_all_blocks=1 00:18:00.918 --rc geninfo_unexecuted_blocks=1 00:18:00.918 
00:18:00.918 ' 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.918 --rc genhtml_branch_coverage=1 00:18:00.918 --rc genhtml_function_coverage=1 00:18:00.918 --rc genhtml_legend=1 00:18:00.918 --rc geninfo_all_blocks=1 00:18:00.918 --rc geninfo_unexecuted_blocks=1 00:18:00.918 00:18:00.918 ' 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.918 --rc genhtml_branch_coverage=1 00:18:00.918 --rc genhtml_function_coverage=1 00:18:00.918 --rc genhtml_legend=1 00:18:00.918 --rc geninfo_all_blocks=1 00:18:00.918 --rc geninfo_unexecuted_blocks=1 00:18:00.918 00:18:00.918 ' 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.918 --rc genhtml_branch_coverage=1 00:18:00.918 --rc genhtml_function_coverage=1 00:18:00.918 --rc genhtml_legend=1 00:18:00.918 --rc geninfo_all_blocks=1 00:18:00.918 --rc geninfo_unexecuted_blocks=1 00:18:00.918 00:18:00.918 ' 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.918 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.919 07:51:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:00.919 07:51:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:03.454 07:51:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:03.454 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:03.454 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:03.454 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:03.454 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:03.454 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:03.454 07:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:03.454 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.454 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.454 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:03.455 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:03.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.455 07:51:56 
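The trace above sorts discovered PCI devices into NIC families by vendor:device ID (0x8086:0x159b is an Intel E810 port bound to the `ice` driver). A minimal stand-in for that matching logic, assuming the IDs shown in the log — the function name `classify_nic` is illustrative and not part of nvmf/common.sh:

```shell
# Hypothetical sketch of the device-ID bucketing traced above: map a PCI
# vendor:device pair to the NIC family the test harness groups it under.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;   # Intel X722 (i40e driver)
        0x15b3:*)                    echo mlx ;;    # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # prints: e810
```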
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:03.455 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:03.455 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.455 07:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:03.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:18:03.455 00:18:03.455 --- 10.0.0.2 ping statistics --- 00:18:03.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.455 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:18:03.455 00:18:03.455 --- 10.0.0.1 ping statistics --- 00:18:03.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.455 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:03.455 07:51:56 
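The netns plumbing traced above puts one E810 port (cvl_0_0, the target, 10.0.0.2) into a private namespace and leaves its peer (cvl_0_1, the initiator, 10.0.0.1) in the root namespace, so NVMe/TCP traffic crosses a real link. The sketch below only prints the equivalent command sequence rather than executing it, since the real commands need root; the function name is illustrative, and the interface/namespace names mirror the log:

```shell
# Emit (not run) the namespace setup the harness performs for a given
# target/initiator interface pair. Namespace defaults to <target>_ns_spdk
# as seen in the trace.
setup_netns_cmds() {
    local tgt_if=$1 ini_if=$2 ns=${3:-${1}_ns_spdk}
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $tgt_if netns $ns" \
        "ip addr add 10.0.0.1/24 dev $ini_if" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if" \
        "ip link set $ini_if up" \
        "ip netns exec $ns ip link set $tgt_if up"
}

setup_netns_cmds cvl_0_0 cvl_0_1
```

The bidirectional pings in the log (root namespace to 10.0.0.2, namespace to 10.0.0.1) then confirm the topology before the nvmf target is started inside the namespace.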
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=720393 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:03.455 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 720393 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 720393 ']' 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.456 [2024-11-18 07:51:56.231709] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:18:03.456 [2024-11-18 07:51:56.231804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.456 [2024-11-18 07:51:56.305755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.456 [2024-11-18 07:51:56.350716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.456 [2024-11-18 07:51:56.350801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.456 [2024-11-18 07:51:56.350826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.456 [2024-11-18 07:51:56.350838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.456 [2024-11-18 07:51:56.350847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:03.456 [2024-11-18 07:51:56.352399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.456 [2024-11-18 07:51:56.352520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.456 [2024-11-18 07:51:56.352595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.456 [2024-11-18 07:51:56.352592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.456 [2024-11-18 07:51:56.495110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:03.456 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 Malloc0 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 Malloc1 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 [2024-11-18 07:51:56.597357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:03.715 00:18:03.715 Discovery Log Number of Records 2, Generation counter 2 00:18:03.715 =====Discovery Log Entry 0====== 00:18:03.715 trtype: tcp 00:18:03.715 adrfam: ipv4 00:18:03.715 subtype: current discovery subsystem 00:18:03.715 treq: not required 00:18:03.715 portid: 0 00:18:03.715 trsvcid: 4420 
00:18:03.715 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:03.715 traddr: 10.0.0.2 00:18:03.715 eflags: explicit discovery connections, duplicate discovery information 00:18:03.715 sectype: none 00:18:03.715 =====Discovery Log Entry 1====== 00:18:03.715 trtype: tcp 00:18:03.715 adrfam: ipv4 00:18:03.715 subtype: nvme subsystem 00:18:03.715 treq: not required 00:18:03.715 portid: 0 00:18:03.715 trsvcid: 4420 00:18:03.715 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:03.715 traddr: 10.0.0.2 00:18:03.715 eflags: none 00:18:03.715 sectype: none 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:03.715 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:03.716 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:03.716 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:03.716 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:03.716 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:03.716 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:03.716 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:03.716 07:51:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:04.648 07:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:04.648 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:04.648 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.648 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:04.648 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:04.648 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:06.547 
07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:06.547 /dev/nvme0n2 ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
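The repeated `read -r dev _` / `[[ ... == /dev/nvme* ]]` trace above is the harness's `get_nvme_devs` pattern: scan `nvme list` output and keep only the device-node column entries. A self-contained approximation, with a canned sample standing in for the real `nvme list` call:

```shell
# Read `nvme list`-style output on stdin and print only /dev/nvme* device
# paths, skipping the header and separator rows, as the trace above does.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

sample='Node                  SN       Model
--------------------- -------- -----
/dev/nvme0n1          SPDK001  ctrl
/dev/nvme0n2          SPDK001  ctrl'

get_nvme_devs <<<"$sample"
# prints: /dev/nvme0n1 and /dev/nvme0n2, one per line
```

The test compares this count before and after `nvme connect` (nvme_num_before_connection=0, nvme_num=2) to verify that both namespaces appeared.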
return 0 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.547 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.547 rmmod nvme_tcp 00:18:06.547 rmmod nvme_fabrics 00:18:06.547 rmmod nvme_keyring 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 720393 ']' 
00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 720393 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 720393 ']' 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 720393 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 720393 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 720393' 00:18:06.805 killing process with pid 720393 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 720393 00:18:06.805 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 720393 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
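The `iptr` cleanup traced above works because the earlier `ipts` insertion tagged its rule with an `SPDK_NVMF` comment: teardown pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, dropping only the harness's rules. An illustrative stand-in (not the real helper), with a canned rules dump replacing the `iptables-save` call:

```shell
# Filter an iptables-save dump, removing only rules the harness tagged
# with the SPDK_NVMF comment at insertion time.
strip_spdk_rules() {
    grep -v SPDK_NVMF
}

rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -j DROP'

strip_spdk_rules <<<"$rules"
# prints the first and last rule; the tagged 4420 rule is removed
```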
00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.063 07:51:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.969 07:52:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:08.969 00:18:08.969 real 0m8.271s 00:18:08.969 user 0m14.998s 00:18:08.969 sys 0m2.316s 00:18:08.969 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.970 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:08.970 ************************************ 00:18:08.970 END TEST nvmf_nvme_cli 00:18:08.970 ************************************ 00:18:08.970 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:08.970 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:08.970 07:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.970 07:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.970 07:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.970 ************************************ 00:18:08.970 START TEST 
nvmf_vfio_user 00:18:08.970 ************************************ 00:18:08.970 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:09.229 * Looking for test storage... 00:18:09.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.229 07:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:09.229 07:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:09.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.229 --rc genhtml_branch_coverage=1 00:18:09.229 --rc genhtml_function_coverage=1 00:18:09.229 --rc genhtml_legend=1 00:18:09.229 --rc geninfo_all_blocks=1 00:18:09.229 --rc geninfo_unexecuted_blocks=1 00:18:09.229 00:18:09.229 ' 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:09.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.229 --rc genhtml_branch_coverage=1 00:18:09.229 --rc genhtml_function_coverage=1 00:18:09.229 --rc genhtml_legend=1 00:18:09.229 --rc geninfo_all_blocks=1 00:18:09.229 --rc geninfo_unexecuted_blocks=1 00:18:09.229 00:18:09.229 ' 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:09.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.229 --rc genhtml_branch_coverage=1 00:18:09.229 --rc genhtml_function_coverage=1 00:18:09.229 --rc genhtml_legend=1 00:18:09.229 --rc geninfo_all_blocks=1 00:18:09.229 --rc geninfo_unexecuted_blocks=1 00:18:09.229 00:18:09.229 ' 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:09.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.229 --rc genhtml_branch_coverage=1 00:18:09.229 --rc genhtml_function_coverage=1 00:18:09.229 --rc genhtml_legend=1 00:18:09.229 --rc geninfo_all_blocks=1 00:18:09.229 --rc geninfo_unexecuted_blocks=1 00:18:09.229 00:18:09.229 ' 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.229 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.230 
07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:09.230 07:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=721264 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 721264' 00:18:09.230 Process pid: 721264 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 721264 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
721264 ']' 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.230 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:09.230 [2024-11-18 07:52:02.253123] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:09.230 [2024-11-18 07:52:02.253203] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.488 [2024-11-18 07:52:02.323126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.488 [2024-11-18 07:52:02.368164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.488 [2024-11-18 07:52:02.368222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.488 [2024-11-18 07:52:02.368235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.488 [2024-11-18 07:52:02.368245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.488 [2024-11-18 07:52:02.368254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
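Between launching `nvmf_tgt` (pid 721264) and the reactor-start notices that follow, the trace runs autotest_common.sh's `waitforlisten`: it prints the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message, then polls up to `max_retries=100` times. A sketch of that shape under stated assumptions: the real helper does an RPC round-trip over the socket, which is replaced here by a plain `-S` existence test, so this is an approximation, not SPDK's implementation:

```shell
# Approximate shape of the waitforlisten helper traced above: poll until the
# target either answers on its RPC socket or dies, up to max_retries tries.
# A '[ -S ... ]' existence check stands in for the real RPC round-trip.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    [ -n "$pid" ] || return 1                    # the '[ -z $1 ]' guard
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$rpc_addr" ] && return 0           # socket is up; target is ready
        sleep 0.1
    done
    return 1                                     # timed out
}
```

This is why the trap on the line above (`killprocess $nvmfpid; exit 1` on SIGINT/SIGTERM/EXIT) is installed before the wait: if the target never comes up, the test still tears it down.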
00:18:09.488 [2024-11-18 07:52:02.369631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.488 [2024-11-18 07:52:02.369697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.488 [2024-11-18 07:52:02.369763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.488 [2024-11-18 07:52:02.369765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.488 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.488 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:09.488 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:10.859 07:52:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:10.859 07:52:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:10.859 07:52:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:10.859 07:52:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:10.859 07:52:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:10.859 07:52:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:11.118 Malloc1 00:18:11.118 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:11.378 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:11.636 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:11.894 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:11.894 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:11.894 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:12.152 Malloc2 00:18:12.409 07:52:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:12.666 07:52:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:12.924 07:52:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:13.184 07:52:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:13.184 07:52:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:13.184 07:52:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:13.184 07:52:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:13.184 07:52:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:13.184 07:52:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:13.184 [2024-11-18 07:52:06.077395] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:13.184 [2024-11-18 07:52:06.077437] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid721698 ] 00:18:13.184 [2024-11-18 07:52:06.127716] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:13.184 [2024-11-18 07:52:06.136957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:13.184 [2024-11-18 07:52:06.136986] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6529b59000 00:18:13.184 [2024-11-18 07:52:06.137951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:13.184 [2024-11-18 07:52:06.138939] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:13.184 [2024-11-18 07:52:06.139945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:13.184 [2024-11-18 07:52:06.140952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:13.184 [2024-11-18 07:52:06.141954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:13.184 [2024-11-18 07:52:06.142956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:13.184 [2024-11-18 07:52:06.143963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:13.184 [2024-11-18 07:52:06.144971] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:13.184 [2024-11-18 07:52:06.145978] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:13.184 [2024-11-18 07:52:06.145998] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6528851000 00:18:13.184 [2024-11-18 07:52:06.147153] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:13.184 [2024-11-18 07:52:06.162833] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:13.184 [2024-11-18 07:52:06.162894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:13.184 [2024-11-18 07:52:06.165084] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:13.184 [2024-11-18 07:52:06.165146] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:13.184 [2024-11-18 07:52:06.165241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:13.184 [2024-11-18 07:52:06.165275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:13.184 [2024-11-18 07:52:06.165286] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:13.184 [2024-11-18 07:52:06.166087] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:13.184 [2024-11-18 07:52:06.166107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:13.184 [2024-11-18 07:52:06.166119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:13.184 [2024-11-18 07:52:06.167094] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:13.184 [2024-11-18 07:52:06.167114] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:13.184 [2024-11-18 07:52:06.167128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:13.184 [2024-11-18 07:52:06.168099] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:13.184 [2024-11-18 07:52:06.168122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:13.184 [2024-11-18 07:52:06.169103] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:13.184 [2024-11-18 07:52:06.169122] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:13.184 [2024-11-18 07:52:06.169131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:13.185 [2024-11-18 07:52:06.169142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:13.185 [2024-11-18 07:52:06.169252] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:13.185 [2024-11-18 07:52:06.169259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:13.185 [2024-11-18 07:52:06.169269] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:13.185 [2024-11-18 07:52:06.170114] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:13.185 [2024-11-18 07:52:06.171116] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:13.185 [2024-11-18 07:52:06.172125] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:13.185 [2024-11-18 07:52:06.173121] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:13.185 [2024-11-18 07:52:06.173217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:13.185 [2024-11-18 07:52:06.174135] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:13.185 [2024-11-18 07:52:06.174153] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:13.185 [2024-11-18 07:52:06.174162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174186] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:13.185 [2024-11-18 07:52:06.174200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:13.185 [2024-11-18 07:52:06.174240] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:13.185 [2024-11-18 07:52:06.174246] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:13.185 [2024-11-18 07:52:06.174268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.174324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.174344] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:13.185 [2024-11-18 07:52:06.174352] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:13.185 [2024-11-18 07:52:06.174365] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:13.185 [2024-11-18 07:52:06.174375] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:13.185 [2024-11-18 07:52:06.174386] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:13.185 [2024-11-18 07:52:06.174396] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:13.185 [2024-11-18 07:52:06.174404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.174456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.174474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.185 [2024-11-18 07:52:06.174486] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.185 [2024-11-18 07:52:06.174522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.185 [2024-11-18 07:52:06.174546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.185 [2024-11-18 07:52:06.174554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.174595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.174611] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:13.185 [2024-11-18 07:52:06.174621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.174667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.174734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174768] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:13.185 [2024-11-18 07:52:06.174777] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:13.185 [2024-11-18 07:52:06.174783] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:13.185 [2024-11-18 07:52:06.174793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.174818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.174853] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:13.185 [2024-11-18 07:52:06.174874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174903] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:13.185 [2024-11-18 07:52:06.174911] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:13.185 [2024-11-18 07:52:06.174916] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:13.185 [2024-11-18 07:52:06.174926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.174952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.174977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.174992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.175003] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:13.185 [2024-11-18 07:52:06.175011] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:13.185 [2024-11-18 07:52:06.175017] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:13.185 [2024-11-18 07:52:06.175026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.175040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:13.185 [2024-11-18 07:52:06.175055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.175066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.175080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.175091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.175099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.175108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.175120] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:13.185 [2024-11-18 07:52:06.175128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:13.185 [2024-11-18 07:52:06.175137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:13.185 [2024-11-18 07:52:06.175166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.175184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.175203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.175215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.175231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.175242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.175257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.175269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.175291] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:13.185 [2024-11-18 07:52:06.175301] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:13.185 [2024-11-18 07:52:06.175307] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:13.185 [2024-11-18 07:52:06.175312] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:13.185 [2024-11-18 07:52:06.175318] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:13.185 [2024-11-18 07:52:06.175327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:13.185 [2024-11-18 07:52:06.175338] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:13.185 [2024-11-18 07:52:06.175346] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:13.185 [2024-11-18 07:52:06.175351] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:13.185 [2024-11-18 07:52:06.175360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.175370] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:13.185 [2024-11-18 07:52:06.175378] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:13.185 [2024-11-18 07:52:06.175383] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:13.185 [2024-11-18 07:52:06.175392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.175404] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:13.185 [2024-11-18 07:52:06.175411] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:13.185 [2024-11-18 07:52:06.175417] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:13.185 [2024-11-18 07:52:06.175425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:13.185 [2024-11-18 07:52:06.175440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 
07:52:06.175461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.175481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:13.185 [2024-11-18 07:52:06.175518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:13.185 ===================================================== 00:18:13.185 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:13.185 ===================================================== 00:18:13.185 Controller Capabilities/Features 00:18:13.185 ================================ 00:18:13.185 Vendor ID: 4e58 00:18:13.185 Subsystem Vendor ID: 4e58 00:18:13.185 Serial Number: SPDK1 00:18:13.185 Model Number: SPDK bdev Controller 00:18:13.185 Firmware Version: 25.01 00:18:13.185 Recommended Arb Burst: 6 00:18:13.185 IEEE OUI Identifier: 8d 6b 50 00:18:13.185 Multi-path I/O 00:18:13.185 May have multiple subsystem ports: Yes 00:18:13.185 May have multiple controllers: Yes 00:18:13.185 Associated with SR-IOV VF: No 00:18:13.185 Max Data Transfer Size: 131072 00:18:13.185 Max Number of Namespaces: 32 00:18:13.185 Max Number of I/O Queues: 127 00:18:13.185 NVMe Specification Version (VS): 1.3 00:18:13.185 NVMe Specification Version (Identify): 1.3 00:18:13.185 Maximum Queue Entries: 256 00:18:13.185 Contiguous Queues Required: Yes 00:18:13.185 Arbitration Mechanisms Supported 00:18:13.185 Weighted Round Robin: Not Supported 00:18:13.185 Vendor Specific: Not Supported 00:18:13.185 Reset Timeout: 15000 ms 00:18:13.185 Doorbell Stride: 4 bytes 00:18:13.185 NVM Subsystem Reset: Not Supported 00:18:13.185 Command Sets Supported 00:18:13.185 NVM Command Set: Supported 00:18:13.185 Boot Partition: Not Supported 00:18:13.185 Memory Page Size Minimum: 4096 bytes 00:18:13.185 
Memory Page Size Maximum: 4096 bytes 00:18:13.185 Persistent Memory Region: Not Supported 00:18:13.185 Optional Asynchronous Events Supported 00:18:13.185 Namespace Attribute Notices: Supported 00:18:13.185 Firmware Activation Notices: Not Supported 00:18:13.185 ANA Change Notices: Not Supported 00:18:13.185 PLE Aggregate Log Change Notices: Not Supported 00:18:13.185 LBA Status Info Alert Notices: Not Supported 00:18:13.185 EGE Aggregate Log Change Notices: Not Supported 00:18:13.185 Normal NVM Subsystem Shutdown event: Not Supported 00:18:13.185 Zone Descriptor Change Notices: Not Supported 00:18:13.185 Discovery Log Change Notices: Not Supported 00:18:13.185 Controller Attributes 00:18:13.185 128-bit Host Identifier: Supported 00:18:13.185 Non-Operational Permissive Mode: Not Supported 00:18:13.185 NVM Sets: Not Supported 00:18:13.185 Read Recovery Levels: Not Supported 00:18:13.185 Endurance Groups: Not Supported 00:18:13.185 Predictable Latency Mode: Not Supported 00:18:13.185 Traffic Based Keep ALive: Not Supported 00:18:13.185 Namespace Granularity: Not Supported 00:18:13.185 SQ Associations: Not Supported 00:18:13.185 UUID List: Not Supported 00:18:13.185 Multi-Domain Subsystem: Not Supported 00:18:13.185 Fixed Capacity Management: Not Supported 00:18:13.185 Variable Capacity Management: Not Supported 00:18:13.185 Delete Endurance Group: Not Supported 00:18:13.185 Delete NVM Set: Not Supported 00:18:13.185 Extended LBA Formats Supported: Not Supported 00:18:13.185 Flexible Data Placement Supported: Not Supported 00:18:13.185 00:18:13.185 Controller Memory Buffer Support 00:18:13.185 ================================ 00:18:13.185 Supported: No 00:18:13.185 00:18:13.185 Persistent Memory Region Support 00:18:13.185 ================================ 00:18:13.185 Supported: No 00:18:13.185 00:18:13.185 Admin Command Set Attributes 00:18:13.185 ============================ 00:18:13.185 Security Send/Receive: Not Supported 00:18:13.185 Format NVM: Not Supported 
00:18:13.185 Firmware Activate/Download: Not Supported 00:18:13.185 Namespace Management: Not Supported 00:18:13.185 Device Self-Test: Not Supported 00:18:13.185 Directives: Not Supported 00:18:13.185 NVMe-MI: Not Supported 00:18:13.185 Virtualization Management: Not Supported 00:18:13.185 Doorbell Buffer Config: Not Supported 00:18:13.185 Get LBA Status Capability: Not Supported 00:18:13.185 Command & Feature Lockdown Capability: Not Supported 00:18:13.185 Abort Command Limit: 4 00:18:13.185 Async Event Request Limit: 4 00:18:13.185 Number of Firmware Slots: N/A 00:18:13.185 Firmware Slot 1 Read-Only: N/A 00:18:13.185 Firmware Activation Without Reset: N/A 00:18:13.185 Multiple Update Detection Support: N/A 00:18:13.185 Firmware Update Granularity: No Information Provided 00:18:13.185 Per-Namespace SMART Log: No 00:18:13.185 Asymmetric Namespace Access Log Page: Not Supported 00:18:13.185 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:13.185 Command Effects Log Page: Supported 00:18:13.185 Get Log Page Extended Data: Supported 00:18:13.185 Telemetry Log Pages: Not Supported 00:18:13.185 Persistent Event Log Pages: Not Supported 00:18:13.186 Supported Log Pages Log Page: May Support 00:18:13.186 Commands Supported & Effects Log Page: Not Supported 00:18:13.186 Feature Identifiers & Effects Log Page:May Support 00:18:13.186 NVMe-MI Commands & Effects Log Page: May Support 00:18:13.186 Data Area 4 for Telemetry Log: Not Supported 00:18:13.186 Error Log Page Entries Supported: 128 00:18:13.186 Keep Alive: Supported 00:18:13.186 Keep Alive Granularity: 10000 ms 00:18:13.186 00:18:13.186 NVM Command Set Attributes 00:18:13.186 ========================== 00:18:13.186 Submission Queue Entry Size 00:18:13.186 Max: 64 00:18:13.186 Min: 64 00:18:13.186 Completion Queue Entry Size 00:18:13.186 Max: 16 00:18:13.186 Min: 16 00:18:13.186 Number of Namespaces: 32 00:18:13.186 Compare Command: Supported 00:18:13.186 Write Uncorrectable Command: Not Supported 00:18:13.186 Dataset 
Management Command: Supported 00:18:13.186 Write Zeroes Command: Supported 00:18:13.186 Set Features Save Field: Not Supported 00:18:13.186 Reservations: Not Supported 00:18:13.186 Timestamp: Not Supported 00:18:13.186 Copy: Supported 00:18:13.186 Volatile Write Cache: Present 00:18:13.186 Atomic Write Unit (Normal): 1 00:18:13.186 Atomic Write Unit (PFail): 1 00:18:13.186 Atomic Compare & Write Unit: 1 00:18:13.186 Fused Compare & Write: Supported 00:18:13.186 Scatter-Gather List 00:18:13.186 SGL Command Set: Supported (Dword aligned) 00:18:13.186 SGL Keyed: Not Supported 00:18:13.186 SGL Bit Bucket Descriptor: Not Supported 00:18:13.186 SGL Metadata Pointer: Not Supported 00:18:13.186 Oversized SGL: Not Supported 00:18:13.186 SGL Metadata Address: Not Supported 00:18:13.186 SGL Offset: Not Supported 00:18:13.186 Transport SGL Data Block: Not Supported 00:18:13.186 Replay Protected Memory Block: Not Supported 00:18:13.186 00:18:13.186 Firmware Slot Information 00:18:13.186 ========================= 00:18:13.186 Active slot: 1 00:18:13.186 Slot 1 Firmware Revision: 25.01 00:18:13.186 00:18:13.186 00:18:13.186 Commands Supported and Effects 00:18:13.186 ============================== 00:18:13.186 Admin Commands 00:18:13.186 -------------- 00:18:13.186 Get Log Page (02h): Supported 00:18:13.186 Identify (06h): Supported 00:18:13.186 Abort (08h): Supported 00:18:13.186 Set Features (09h): Supported 00:18:13.186 Get Features (0Ah): Supported 00:18:13.186 Asynchronous Event Request (0Ch): Supported 00:18:13.186 Keep Alive (18h): Supported 00:18:13.186 I/O Commands 00:18:13.186 ------------ 00:18:13.186 Flush (00h): Supported LBA-Change 00:18:13.186 Write (01h): Supported LBA-Change 00:18:13.186 Read (02h): Supported 00:18:13.186 Compare (05h): Supported 00:18:13.186 Write Zeroes (08h): Supported LBA-Change 00:18:13.186 Dataset Management (09h): Supported LBA-Change 00:18:13.186 Copy (19h): Supported LBA-Change 00:18:13.186 00:18:13.186 Error Log 00:18:13.186 ========= 
00:18:13.186 00:18:13.186 Arbitration 00:18:13.186 =========== 00:18:13.186 Arbitration Burst: 1 00:18:13.186 00:18:13.186 Power Management 00:18:13.186 ================ 00:18:13.186 Number of Power States: 1 00:18:13.186 Current Power State: Power State #0 00:18:13.186 Power State #0: 00:18:13.186 Max Power: 0.00 W 00:18:13.186 Non-Operational State: Operational 00:18:13.186 Entry Latency: Not Reported 00:18:13.186 Exit Latency: Not Reported 00:18:13.186 Relative Read Throughput: 0 00:18:13.186 Relative Read Latency: 0 00:18:13.186 Relative Write Throughput: 0 00:18:13.186 Relative Write Latency: 0 00:18:13.186 Idle Power: Not Reported 00:18:13.186 Active Power: Not Reported 00:18:13.186 Non-Operational Permissive Mode: Not Supported 00:18:13.186 00:18:13.186 Health Information 00:18:13.186 ================== 00:18:13.186 Critical Warnings: 00:18:13.186 Available Spare Space: OK 00:18:13.186 Temperature: OK 00:18:13.186 Device Reliability: OK 00:18:13.186 Read Only: No 00:18:13.186 Volatile Memory Backup: OK 00:18:13.186 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:13.186 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:13.186 Available Spare: 0% 00:18:13.186 Available Sp[2024-11-18 07:52:06.175645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:13.186 [2024-11-18 07:52:06.175662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:13.186 [2024-11-18 07:52:06.175704] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:13.186 [2024-11-18 07:52:06.175722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.186 [2024-11-18 07:52:06.175733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.186 [2024-11-18 07:52:06.175743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.186 [2024-11-18 07:52:06.175753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.186 [2024-11-18 07:52:06.178518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:13.186 [2024-11-18 07:52:06.178545] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:13.186 [2024-11-18 07:52:06.179159] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:13.186 [2024-11-18 07:52:06.179235] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:13.186 [2024-11-18 07:52:06.179248] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:13.186 [2024-11-18 07:52:06.180173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:13.186 [2024-11-18 07:52:06.180196] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:13.186 [2024-11-18 07:52:06.180254] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:13.186 [2024-11-18 07:52:06.182217] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:13.186 are Threshold: 0% 00:18:13.186 Life Percentage Used: 0% 00:18:13.186 Data Units Read: 0 00:18:13.186 Data 
Units Written: 0 00:18:13.186 Host Read Commands: 0 00:18:13.186 Host Write Commands: 0 00:18:13.186 Controller Busy Time: 0 minutes 00:18:13.186 Power Cycles: 0 00:18:13.186 Power On Hours: 0 hours 00:18:13.186 Unsafe Shutdowns: 0 00:18:13.186 Unrecoverable Media Errors: 0 00:18:13.186 Lifetime Error Log Entries: 0 00:18:13.186 Warning Temperature Time: 0 minutes 00:18:13.186 Critical Temperature Time: 0 minutes 00:18:13.186 00:18:13.186 Number of Queues 00:18:13.186 ================ 00:18:13.186 Number of I/O Submission Queues: 127 00:18:13.186 Number of I/O Completion Queues: 127 00:18:13.186 00:18:13.186 Active Namespaces 00:18:13.186 ================= 00:18:13.186 Namespace ID:1 00:18:13.186 Error Recovery Timeout: Unlimited 00:18:13.186 Command Set Identifier: NVM (00h) 00:18:13.186 Deallocate: Supported 00:18:13.186 Deallocated/Unwritten Error: Not Supported 00:18:13.186 Deallocated Read Value: Unknown 00:18:13.186 Deallocate in Write Zeroes: Not Supported 00:18:13.186 Deallocated Guard Field: 0xFFFF 00:18:13.186 Flush: Supported 00:18:13.186 Reservation: Supported 00:18:13.186 Namespace Sharing Capabilities: Multiple Controllers 00:18:13.186 Size (in LBAs): 131072 (0GiB) 00:18:13.186 Capacity (in LBAs): 131072 (0GiB) 00:18:13.186 Utilization (in LBAs): 131072 (0GiB) 00:18:13.186 NGUID: 30DCAE0437DA406FACFFFE4097CAD7C8 00:18:13.186 UUID: 30dcae04-37da-406f-acff-fe4097cad7c8 00:18:13.186 Thin Provisioning: Not Supported 00:18:13.186 Per-NS Atomic Units: Yes 00:18:13.186 Atomic Boundary Size (Normal): 0 00:18:13.186 Atomic Boundary Size (PFail): 0 00:18:13.186 Atomic Boundary Offset: 0 00:18:13.186 Maximum Single Source Range Length: 65535 00:18:13.186 Maximum Copy Length: 65535 00:18:13.186 Maximum Source Range Count: 1 00:18:13.186 NGUID/EUI64 Never Reused: No 00:18:13.186 Namespace Write Protected: No 00:18:13.186 Number of LBA Formats: 1 00:18:13.186 Current LBA Format: LBA Format #00 00:18:13.186 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:18:13.186 00:18:13.186 07:52:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:13.444 [2024-11-18 07:52:06.434406] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:18.709 Initializing NVMe Controllers 00:18:18.709 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:18.709 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:18.709 Initialization complete. Launching workers. 00:18:18.709 ======================================================== 00:18:18.709 Latency(us) 00:18:18.709 Device Information : IOPS MiB/s Average min max 00:18:18.709 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32895.50 128.50 3890.52 1167.76 7640.03 00:18:18.709 ======================================================== 00:18:18.709 Total : 32895.50 128.50 3890.52 1167.76 7640.03 00:18:18.709 00:18:18.709 [2024-11-18 07:52:11.459933] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:18.709 07:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:18.709 [2024-11-18 07:52:11.713106] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:23.985 Initializing NVMe Controllers 00:18:23.985 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:23.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:23.985 Initialization complete. Launching workers. 00:18:23.985 ======================================================== 00:18:23.985 Latency(us) 00:18:23.985 Device Information : IOPS MiB/s Average min max 00:18:23.985 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8008.19 6983.53 15961.04 00:18:23.985 ======================================================== 00:18:23.985 Total : 16000.00 62.50 8008.19 6983.53 15961.04 00:18:23.985 00:18:23.985 [2024-11-18 07:52:16.748762] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:23.985 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:23.985 [2024-11-18 07:52:16.978874] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:29.259 [2024-11-18 07:52:22.044783] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:29.259 Initializing NVMe Controllers 00:18:29.259 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:29.259 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:29.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:29.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:29.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:29.259 Initialization complete. Launching workers. 
00:18:29.259 Starting thread on core 2 00:18:29.259 Starting thread on core 3 00:18:29.259 Starting thread on core 1 00:18:29.259 07:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:29.519 [2024-11-18 07:52:22.368961] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:32.808 [2024-11-18 07:52:25.439483] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:32.808 Initializing NVMe Controllers 00:18:32.808 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:32.808 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:32.808 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:32.808 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:32.808 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:32.808 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:32.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:32.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:32.808 Initialization complete. Launching workers. 
00:18:32.808 Starting thread on core 1 with urgent priority queue 00:18:32.808 Starting thread on core 2 with urgent priority queue 00:18:32.809 Starting thread on core 3 with urgent priority queue 00:18:32.809 Starting thread on core 0 with urgent priority queue 00:18:32.809 SPDK bdev Controller (SPDK1 ) core 0: 4593.33 IO/s 21.77 secs/100000 ios 00:18:32.809 SPDK bdev Controller (SPDK1 ) core 1: 4617.67 IO/s 21.66 secs/100000 ios 00:18:32.809 SPDK bdev Controller (SPDK1 ) core 2: 5063.00 IO/s 19.75 secs/100000 ios 00:18:32.809 SPDK bdev Controller (SPDK1 ) core 3: 5138.33 IO/s 19.46 secs/100000 ios 00:18:32.809 ======================================================== 00:18:32.809 00:18:32.809 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:32.809 [2024-11-18 07:52:25.752015] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:32.809 Initializing NVMe Controllers 00:18:32.809 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:32.809 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:32.809 Namespace ID: 1 size: 0GB 00:18:32.809 Initialization complete. 00:18:32.809 INFO: using host memory buffer for IO 00:18:32.809 Hello world! 
00:18:32.809 [2024-11-18 07:52:25.785677] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:32.809 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:33.069 [2024-11-18 07:52:26.104964] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.449 Initializing NVMe Controllers 00:18:34.449 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.449 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.449 Initialization complete. Launching workers. 00:18:34.449 submit (in ns) avg, min, max = 8168.9, 3506.7, 4025325.6 00:18:34.449 complete (in ns) avg, min, max = 26571.2, 2064.4, 7011563.3 00:18:34.449 00:18:34.449 Submit histogram 00:18:34.449 ================ 00:18:34.449 Range in us Cumulative Count 00:18:34.449 3.484 - 3.508: 0.0079% ( 1) 00:18:34.449 3.508 - 3.532: 0.1809% ( 22) 00:18:34.449 3.532 - 3.556: 1.0304% ( 108) 00:18:34.449 3.556 - 3.579: 3.3978% ( 301) 00:18:34.449 3.579 - 3.603: 8.1800% ( 608) 00:18:34.449 3.603 - 3.627: 14.8419% ( 847) 00:18:34.449 3.627 - 3.650: 22.9825% ( 1035) 00:18:34.449 3.650 - 3.674: 29.9355% ( 884) 00:18:34.449 3.674 - 3.698: 36.6132% ( 849) 00:18:34.449 3.698 - 3.721: 43.6920% ( 900) 00:18:34.449 3.721 - 3.745: 50.1258% ( 818) 00:18:34.449 3.745 - 3.769: 55.3170% ( 660) 00:18:34.449 3.769 - 3.793: 59.7530% ( 564) 00:18:34.449 3.793 - 3.816: 63.2924% ( 450) 00:18:34.449 3.816 - 3.840: 66.9577% ( 466) 00:18:34.449 3.840 - 3.864: 71.2207% ( 542) 00:18:34.449 3.864 - 3.887: 75.6017% ( 557) 00:18:34.449 3.887 - 3.911: 79.3063% ( 471) 00:18:34.449 3.911 - 3.935: 82.6490% ( 425) 00:18:34.449 3.935 - 3.959: 85.1896% ( 323) 00:18:34.449 3.959 - 3.982: 87.2739% ( 265) 
00:18:34.449 3.982 - 4.006: 89.0750% ( 229) 00:18:34.449 4.006 - 4.030: 90.3414% ( 161) 00:18:34.449 4.030 - 4.053: 91.5526% ( 154) 00:18:34.449 4.053 - 4.077: 92.7324% ( 150) 00:18:34.449 4.077 - 4.101: 93.6055% ( 111) 00:18:34.449 4.101 - 4.124: 94.2976% ( 88) 00:18:34.449 4.124 - 4.148: 94.9269% ( 80) 00:18:34.449 4.148 - 4.172: 95.3201% ( 50) 00:18:34.449 4.172 - 4.196: 95.6977% ( 48) 00:18:34.449 4.196 - 4.219: 95.9572% ( 33) 00:18:34.449 4.219 - 4.243: 96.0831% ( 16) 00:18:34.449 4.243 - 4.267: 96.2640% ( 23) 00:18:34.449 4.267 - 4.290: 96.4370% ( 22) 00:18:34.449 4.290 - 4.314: 96.5157% ( 10) 00:18:34.449 4.314 - 4.338: 96.6572% ( 18) 00:18:34.449 4.338 - 4.361: 96.7437% ( 11) 00:18:34.449 4.361 - 4.385: 96.8067% ( 8) 00:18:34.449 4.385 - 4.409: 96.8696% ( 8) 00:18:34.449 4.409 - 4.433: 96.9168% ( 6) 00:18:34.449 4.433 - 4.456: 96.9404% ( 3) 00:18:34.449 4.480 - 4.504: 96.9718% ( 4) 00:18:34.449 4.504 - 4.527: 96.9876% ( 2) 00:18:34.449 4.527 - 4.551: 96.9954% ( 1) 00:18:34.449 4.551 - 4.575: 97.0112% ( 2) 00:18:34.449 4.599 - 4.622: 97.0348% ( 3) 00:18:34.449 4.622 - 4.646: 97.0584% ( 3) 00:18:34.449 4.646 - 4.670: 97.0662% ( 1) 00:18:34.449 4.670 - 4.693: 97.0741% ( 1) 00:18:34.449 4.693 - 4.717: 97.1213% ( 6) 00:18:34.449 4.717 - 4.741: 97.1527% ( 4) 00:18:34.449 4.741 - 4.764: 97.1842% ( 4) 00:18:34.449 4.764 - 4.788: 97.2314% ( 6) 00:18:34.449 4.788 - 4.812: 97.3101% ( 10) 00:18:34.449 4.812 - 4.836: 97.3730% ( 8) 00:18:34.449 4.836 - 4.859: 97.3887% ( 2) 00:18:34.449 4.859 - 4.883: 97.4595% ( 9) 00:18:34.449 4.883 - 4.907: 97.5067% ( 6) 00:18:34.449 4.907 - 4.930: 97.5381% ( 4) 00:18:34.449 4.930 - 4.954: 97.5696% ( 4) 00:18:34.449 4.954 - 4.978: 97.5853% ( 2) 00:18:34.450 4.978 - 5.001: 97.6089% ( 3) 00:18:34.450 5.001 - 5.025: 97.6561% ( 6) 00:18:34.450 5.025 - 5.049: 97.6719% ( 2) 00:18:34.450 5.049 - 5.073: 97.7269% ( 7) 00:18:34.450 5.073 - 5.096: 97.7662% ( 5) 00:18:34.450 5.120 - 5.144: 97.7820% ( 2) 00:18:34.450 5.144 - 5.167: 97.7898% ( 1) 
00:18:34.450 5.167 - 5.191: 97.8056% ( 2) 00:18:34.450 5.262 - 5.286: 97.8213% ( 2) 00:18:34.450 5.286 - 5.310: 97.8292% ( 1) 00:18:34.450 5.310 - 5.333: 97.8370% ( 1) 00:18:34.450 5.333 - 5.357: 97.8449% ( 1) 00:18:34.450 5.357 - 5.381: 97.8528% ( 1) 00:18:34.450 5.404 - 5.428: 97.8606% ( 1) 00:18:34.450 5.428 - 5.452: 97.8685% ( 1) 00:18:34.450 5.452 - 5.476: 97.8842% ( 2) 00:18:34.450 5.476 - 5.499: 97.8921% ( 1) 00:18:34.450 5.499 - 5.523: 97.9000% ( 1) 00:18:34.450 5.594 - 5.618: 97.9078% ( 1) 00:18:34.450 5.689 - 5.713: 97.9235% ( 2) 00:18:34.450 5.784 - 5.807: 97.9314% ( 1) 00:18:34.450 5.807 - 5.831: 97.9393% ( 1) 00:18:34.450 5.831 - 5.855: 97.9471% ( 1) 00:18:34.450 6.116 - 6.163: 97.9550% ( 1) 00:18:34.450 6.210 - 6.258: 97.9629% ( 1) 00:18:34.450 6.305 - 6.353: 97.9707% ( 1) 00:18:34.450 6.542 - 6.590: 97.9786% ( 1) 00:18:34.450 6.684 - 6.732: 97.9865% ( 1) 00:18:34.450 6.874 - 6.921: 98.0022% ( 2) 00:18:34.450 6.921 - 6.969: 98.0101% ( 1) 00:18:34.450 7.111 - 7.159: 98.0258% ( 2) 00:18:34.450 7.159 - 7.206: 98.0337% ( 1) 00:18:34.450 7.206 - 7.253: 98.0415% ( 1) 00:18:34.450 7.253 - 7.301: 98.0494% ( 1) 00:18:34.450 7.301 - 7.348: 98.0573% ( 1) 00:18:34.450 7.490 - 7.538: 98.0809% ( 3) 00:18:34.450 7.538 - 7.585: 98.0887% ( 1) 00:18:34.450 7.585 - 7.633: 98.0966% ( 1) 00:18:34.450 7.633 - 7.680: 98.1123% ( 2) 00:18:34.450 7.680 - 7.727: 98.1202% ( 1) 00:18:34.450 7.775 - 7.822: 98.1438% ( 3) 00:18:34.450 7.870 - 7.917: 98.1516% ( 1) 00:18:34.450 8.012 - 8.059: 98.1674% ( 2) 00:18:34.450 8.107 - 8.154: 98.1831% ( 2) 00:18:34.450 8.201 - 8.249: 98.1988% ( 2) 00:18:34.450 8.344 - 8.391: 98.2067% ( 1) 00:18:34.450 8.391 - 8.439: 98.2146% ( 1) 00:18:34.450 8.439 - 8.486: 98.2224% ( 1) 00:18:34.450 8.533 - 8.581: 98.2382% ( 2) 00:18:34.450 8.676 - 8.723: 98.2460% ( 1) 00:18:34.450 8.723 - 8.770: 98.2696% ( 3) 00:18:34.450 8.818 - 8.865: 98.2932% ( 3) 00:18:34.450 8.913 - 8.960: 98.3090% ( 2) 00:18:34.450 8.960 - 9.007: 98.3247% ( 2) 00:18:34.450 9.007 - 
9.055: 98.3325% ( 1) 00:18:34.450 9.055 - 9.102: 98.3483% ( 2) 00:18:34.450 9.102 - 9.150: 98.3640% ( 2) 00:18:34.450 9.150 - 9.197: 98.3719% ( 1) 00:18:34.450 9.197 - 9.244: 98.3797% ( 1) 00:18:34.450 9.244 - 9.292: 98.3876% ( 1) 00:18:34.450 9.292 - 9.339: 98.4033% ( 2) 00:18:34.450 9.339 - 9.387: 98.4112% ( 1) 00:18:34.450 9.387 - 9.434: 98.4191% ( 1) 00:18:34.450 9.576 - 9.624: 98.4269% ( 1) 00:18:34.450 9.624 - 9.671: 98.4348% ( 1) 00:18:34.450 9.671 - 9.719: 98.4427% ( 1) 00:18:34.450 9.813 - 9.861: 98.4505% ( 1) 00:18:34.450 9.908 - 9.956: 98.4584% ( 1) 00:18:34.450 9.956 - 10.003: 98.4663% ( 1) 00:18:34.450 10.003 - 10.050: 98.4820% ( 2) 00:18:34.450 10.050 - 10.098: 98.4899% ( 1) 00:18:34.450 10.098 - 10.145: 98.4977% ( 1) 00:18:34.450 10.193 - 10.240: 98.5056% ( 1) 00:18:34.450 10.524 - 10.572: 98.5134% ( 1) 00:18:34.450 10.572 - 10.619: 98.5292% ( 2) 00:18:34.450 10.619 - 10.667: 98.5449% ( 2) 00:18:34.450 10.667 - 10.714: 98.5528% ( 1) 00:18:34.450 10.714 - 10.761: 98.5685% ( 2) 00:18:34.450 10.951 - 10.999: 98.5842% ( 2) 00:18:34.450 11.093 - 11.141: 98.5921% ( 1) 00:18:34.450 11.236 - 11.283: 98.6000% ( 1) 00:18:34.450 11.283 - 11.330: 98.6078% ( 1) 00:18:34.450 11.615 - 11.662: 98.6157% ( 1) 00:18:34.450 11.662 - 11.710: 98.6314% ( 2) 00:18:34.450 11.710 - 11.757: 98.6393% ( 1) 00:18:34.450 11.757 - 11.804: 98.6472% ( 1) 00:18:34.450 11.804 - 11.852: 98.6550% ( 1) 00:18:34.450 11.899 - 11.947: 98.6629% ( 1) 00:18:34.450 11.994 - 12.041: 98.6708% ( 1) 00:18:34.450 12.136 - 12.231: 98.6786% ( 1) 00:18:34.450 12.231 - 12.326: 98.6865% ( 1) 00:18:34.450 12.326 - 12.421: 98.7022% ( 2) 00:18:34.450 12.421 - 12.516: 98.7101% ( 1) 00:18:34.450 12.610 - 12.705: 98.7179% ( 1) 00:18:34.450 12.705 - 12.800: 98.7337% ( 2) 00:18:34.450 12.800 - 12.895: 98.7415% ( 1) 00:18:34.450 12.990 - 13.084: 98.7573% ( 2) 00:18:34.450 13.084 - 13.179: 98.7651% ( 1) 00:18:34.450 13.179 - 13.274: 98.7730% ( 1) 00:18:34.450 13.274 - 13.369: 98.8045% ( 4) 00:18:34.450 13.369 - 
13.464: 98.8123% ( 1) 00:18:34.450 13.464 - 13.559: 98.8281% ( 2) 00:18:34.450 13.938 - 14.033: 98.8438% ( 2) 00:18:34.450 14.317 - 14.412: 98.8595% ( 2) 00:18:34.450 14.412 - 14.507: 98.8674% ( 1) 00:18:34.450 14.791 - 14.886: 98.8831% ( 2) 00:18:34.450 14.981 - 15.076: 98.8910% ( 1) 00:18:34.450 15.265 - 15.360: 98.8989% ( 1) 00:18:34.450 15.455 - 15.550: 98.9067% ( 1) 00:18:34.450 16.972 - 17.067: 98.9146% ( 1) 00:18:34.450 17.067 - 17.161: 98.9224% ( 1) 00:18:34.450 17.256 - 17.351: 98.9303% ( 1) 00:18:34.450 17.351 - 17.446: 98.9618% ( 4) 00:18:34.450 17.446 - 17.541: 98.9932% ( 4) 00:18:34.450 17.541 - 17.636: 99.0168% ( 3) 00:18:34.450 17.636 - 17.730: 99.0719% ( 7) 00:18:34.450 17.730 - 17.825: 99.0876% ( 2) 00:18:34.450 17.825 - 17.920: 99.1427% ( 7) 00:18:34.450 17.920 - 18.015: 99.1820% ( 5) 00:18:34.450 18.015 - 18.110: 99.2292% ( 6) 00:18:34.450 18.110 - 18.204: 99.3000% ( 9) 00:18:34.450 18.204 - 18.299: 99.3472% ( 6) 00:18:34.450 18.299 - 18.394: 99.4416% ( 12) 00:18:34.450 18.394 - 18.489: 99.5438% ( 13) 00:18:34.450 18.489 - 18.584: 99.5989% ( 7) 00:18:34.450 18.584 - 18.679: 99.6303% ( 4) 00:18:34.450 18.679 - 18.773: 99.6618% ( 4) 00:18:34.450 18.773 - 18.868: 99.7011% ( 5) 00:18:34.450 18.868 - 18.963: 99.7168% ( 2) 00:18:34.450 18.963 - 19.058: 99.7326% ( 2) 00:18:34.450 19.058 - 19.153: 99.7404% ( 1) 00:18:34.450 19.153 - 19.247: 99.7562% ( 2) 00:18:34.450 19.342 - 19.437: 99.7640% ( 1) 00:18:34.450 19.437 - 19.532: 99.7876% ( 3) 00:18:34.450 19.532 - 19.627: 99.8034% ( 2) 00:18:34.450 19.721 - 19.816: 99.8112% ( 1) 00:18:34.450 20.196 - 20.290: 99.8191% ( 1) 00:18:34.450 22.092 - 22.187: 99.8270% ( 1) 00:18:34.450 22.187 - 22.281: 99.8348% ( 1) 00:18:34.450 22.756 - 22.850: 99.8427% ( 1) 00:18:34.450 23.230 - 23.324: 99.8506% ( 1) 00:18:34.450 23.419 - 23.514: 99.8584% ( 1) 00:18:34.450 25.031 - 25.221: 99.8663% ( 1) 00:18:34.450 27.117 - 27.307: 99.8742% ( 1) 00:18:34.450 27.307 - 27.496: 99.8820% ( 1) 00:18:34.450 27.876 - 28.065: 99.8899% 
( 1) 00:18:34.450 1061.926 - 1067.994: 99.8978% ( 1) 00:18:34.450 3980.705 - 4004.978: 99.9764% ( 10) 00:18:34.450 4004.978 - 4029.250: 100.0000% ( 3) 00:18:34.450 00:18:34.450 Complete histogram 00:18:34.450 ================== 00:18:34.450 Range in us Cumulative Count 00:18:34.450 2.062 - 2.074: 7.1260% ( 906) 00:18:34.450 2.074 - 2.086: 43.5583% ( 4632) 00:18:34.450 2.086 - 2.098: 47.3808% ( 486) 00:18:34.450 2.098 - 2.110: 52.2810% ( 623) 00:18:34.450 2.110 - 2.121: 57.8339% ( 706) 00:18:34.450 2.121 - 2.133: 59.2890% ( 185) 00:18:34.450 2.133 - 2.145: 67.5476% ( 1050) 00:18:34.450 2.145 - 2.157: 77.3242% ( 1243) 00:18:34.450 2.157 - 2.169: 78.3860% ( 135) 00:18:34.450 2.169 - 2.181: 80.9659% ( 328) 00:18:34.450 2.181 - 2.193: 82.7198% ( 223) 00:18:34.450 2.193 - 2.204: 83.2311% ( 65) 00:18:34.450 2.204 - 2.216: 85.5986% ( 301) 00:18:34.450 2.216 - 2.228: 89.3896% ( 482) 00:18:34.450 2.228 - 2.240: 91.2773% ( 240) 00:18:34.450 2.240 - 2.252: 92.6774% ( 178) 00:18:34.450 2.252 - 2.264: 93.3302% ( 83) 00:18:34.450 2.264 - 2.276: 93.5268% ( 25) 00:18:34.450 2.276 - 2.287: 93.8257% ( 38) 00:18:34.450 2.287 - 2.299: 94.2426% ( 53) 00:18:34.450 2.299 - 2.311: 94.8875% ( 82) 00:18:34.450 2.311 - 2.323: 95.2415% ( 45) 00:18:34.450 2.323 - 2.335: 95.3437% ( 13) 00:18:34.450 2.335 - 2.347: 95.3594% ( 2) 00:18:34.450 2.347 - 2.359: 95.4224% ( 8) 00:18:34.450 2.359 - 2.370: 95.5954% ( 22) 00:18:34.450 2.370 - 2.382: 95.8707% ( 35) 00:18:34.450 2.382 - 2.394: 96.1617% ( 37) 00:18:34.450 2.394 - 2.406: 96.4606% ( 38) 00:18:34.450 2.406 - 2.418: 96.7044% ( 31) 00:18:34.450 2.418 - 2.430: 96.8460% ( 18) 00:18:34.450 2.430 - 2.441: 97.0190% ( 22) 00:18:34.451 2.441 - 2.453: 97.1763% ( 20) 00:18:34.451 2.453 - 2.465: 97.4044% ( 29) 00:18:34.451 2.465 - 2.477: 97.6011% ( 25) 00:18:34.451 2.477 - 2.489: 97.8056% ( 26) 00:18:34.451 2.489 - 2.501: 97.8921% ( 11) 00:18:34.451 2.501 - 2.513: 97.9550% ( 8) 00:18:34.451 2.513 - 2.524: 98.0415% ( 11) 00:18:34.451 2.524 - 2.536: 98.1045% ( 
8) 00:18:34.451 2.536 - 2.548: 98.1438% ( 5) 00:18:34.451 2.548 - 2.560: 98.1831% ( 5) [2024-11-18 07:52:27.125241] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:34.451 2.560 - 2.572: 98.2067% ( 3) 00:18:34.451 2.572 - 2.584: 98.2303% ( 3) 00:18:34.451 2.596 - 2.607: 98.2382% ( 1) 00:18:34.451 2.607 - 2.619: 98.2618% ( 3) 00:18:34.451 2.631 - 2.643: 98.2696% ( 1) 00:18:34.451 2.643 - 2.655: 98.2775% ( 1) 00:18:34.451 2.667 - 2.679: 98.2854% ( 1) 00:18:34.451 2.679 - 2.690: 98.2932% ( 1) 00:18:34.451 2.690 - 2.702: 98.3011% ( 1) 00:18:34.451 2.750 - 2.761: 98.3090% ( 1) 00:18:34.451 3.058 - 3.081: 98.3168% ( 1) 00:18:34.451 3.461 - 3.484: 98.3247% ( 1) 00:18:34.451 3.508 - 3.532: 98.3404% ( 2) 00:18:34.451 3.532 - 3.556: 98.3483% ( 1) 00:18:34.451 3.579 - 3.603: 98.3561% ( 1) 00:18:34.451 3.650 - 3.674: 98.3797% ( 3) 00:18:34.451 3.674 - 3.698: 98.3955% ( 2) 00:18:34.451 3.698 - 3.721: 98.4112% ( 2) 00:18:34.451 3.745 - 3.769: 98.4191% ( 1) 00:18:34.451 3.769 - 3.793: 98.4269% ( 1) 00:18:34.451 3.793 - 3.816: 98.4505% ( 3) 00:18:34.451 3.887 - 3.911: 98.4584% ( 1) 00:18:34.451 3.959 - 3.982: 98.4663% ( 1) 00:18:34.451 3.982 - 4.006: 98.4741% ( 1) 00:18:34.451 4.006 - 4.030: 98.4899% ( 2) 00:18:34.451 4.077 - 4.101: 98.4977% ( 1) 00:18:34.451 4.148 - 4.172: 98.5056% ( 1) 00:18:34.451 4.670 - 4.693: 98.5134% ( 1) 00:18:34.451 5.404 - 5.428: 98.5213% ( 1) 00:18:34.451 5.476 - 5.499: 98.5292% ( 1) 00:18:34.451 5.641 - 5.665: 98.5370% ( 1) 00:18:34.451 6.068 - 6.116: 98.5449% ( 1) 00:18:34.451 6.447 - 6.495: 98.5528% ( 1) 00:18:34.451 6.637 - 6.684: 98.5606% ( 1) 00:18:34.451 6.779 - 6.827: 98.5764% ( 2) 00:18:34.451 7.064 - 7.111: 98.5842% ( 1) 00:18:34.451 7.159 - 7.206: 98.5921% ( 1) 00:18:34.451 7.206 - 7.253: 98.6000% ( 1) 00:18:34.451 7.490 - 7.538: 98.6078% ( 1) 00:18:34.451 7.870 - 7.917: 98.6157% ( 1) 00:18:34.451 7.964 - 8.012: 98.6236% ( 1) 00:18:34.451 8.296 - 8.344: 98.6314% ( 1) 
00:18:34.451 8.439 - 8.486: 98.6393% ( 1) 00:18:34.451 8.865 - 8.913: 98.6472% ( 1) 00:18:34.451 10.477 - 10.524: 98.6550% ( 1) 00:18:34.451 12.089 - 12.136: 98.6629% ( 1) 00:18:34.451 13.653 - 13.748: 98.6708% ( 1) 00:18:34.451 15.360 - 15.455: 98.6865% ( 2) 00:18:34.451 15.455 - 15.550: 98.7022% ( 2) 00:18:34.451 15.550 - 15.644: 98.7179% ( 2) 00:18:34.451 15.644 - 15.739: 98.7415% ( 3) 00:18:34.451 15.739 - 15.834: 98.7573% ( 2) 00:18:34.451 15.834 - 15.929: 98.7651% ( 1) 00:18:34.451 15.929 - 16.024: 98.7730% ( 1) 00:18:34.451 16.024 - 16.119: 98.8202% ( 6) 00:18:34.451 16.119 - 16.213: 98.8359% ( 2) 00:18:34.451 16.213 - 16.308: 98.8753% ( 5) 00:18:34.451 16.308 - 16.403: 98.9067% ( 4) 00:18:34.451 16.403 - 16.498: 98.9303% ( 3) 00:18:34.451 16.498 - 16.593: 98.9932% ( 8) 00:18:34.451 16.593 - 16.687: 99.0640% ( 9) 00:18:34.451 16.687 - 16.782: 99.1427% ( 10) 00:18:34.451 16.782 - 16.877: 99.1741% ( 4) 00:18:34.451 16.877 - 16.972: 99.1977% ( 3) 00:18:34.451 16.972 - 17.067: 99.2135% ( 2) 00:18:34.451 17.067 - 17.161: 99.2607% ( 6) 00:18:34.451 17.161 - 17.256: 99.2843% ( 3) 00:18:34.451 17.256 - 17.351: 99.3000% ( 2) 00:18:34.451 17.351 - 17.446: 99.3078% ( 1) 00:18:34.451 17.446 - 17.541: 99.3236% ( 2) 00:18:34.451 17.730 - 17.825: 99.3314% ( 1) 00:18:34.451 18.015 - 18.110: 99.3393% ( 1) 00:18:34.451 18.204 - 18.299: 99.3472% ( 1) 00:18:34.451 18.489 - 18.584: 99.3629% ( 2) 00:18:34.451 18.584 - 18.679: 99.3708% ( 1) 00:18:34.451 18.773 - 18.868: 99.3786% ( 1) 00:18:34.451 19.153 - 19.247: 99.3865% ( 1) 00:18:34.451 1025.517 - 1031.585: 99.4022% ( 2) 00:18:34.451 1031.585 - 1037.653: 99.4101% ( 1) 00:18:34.451 3980.705 - 4004.978: 99.7955% ( 49) 00:18:34.451 4004.978 - 4029.250: 99.9843% ( 24) 00:18:34.451 6990.507 - 7039.052: 100.0000% ( 2) 00:18:34.451 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:34.451 07:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:34.451 [ 00:18:34.451 { 00:18:34.451 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:34.451 "subtype": "Discovery", 00:18:34.451 "listen_addresses": [], 00:18:34.451 "allow_any_host": true, 00:18:34.451 "hosts": [] 00:18:34.451 }, 00:18:34.451 { 00:18:34.451 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:34.451 "subtype": "NVMe", 00:18:34.451 "listen_addresses": [ 00:18:34.451 { 00:18:34.451 "trtype": "VFIOUSER", 00:18:34.451 "adrfam": "IPv4", 00:18:34.451 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:34.451 "trsvcid": "0" 00:18:34.451 } 00:18:34.451 ], 00:18:34.451 "allow_any_host": true, 00:18:34.451 "hosts": [], 00:18:34.451 "serial_number": "SPDK1", 00:18:34.451 "model_number": "SPDK bdev Controller", 00:18:34.451 "max_namespaces": 32, 00:18:34.451 "min_cntlid": 1, 00:18:34.451 "max_cntlid": 65519, 00:18:34.451 "namespaces": [ 00:18:34.451 { 00:18:34.451 "nsid": 1, 00:18:34.451 "bdev_name": "Malloc1", 00:18:34.451 "name": "Malloc1", 00:18:34.451 "nguid": "30DCAE0437DA406FACFFFE4097CAD7C8", 00:18:34.451 "uuid": "30dcae04-37da-406f-acff-fe4097cad7c8" 00:18:34.451 } 00:18:34.451 ] 00:18:34.451 }, 00:18:34.451 { 00:18:34.451 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:34.451 "subtype": "NVMe", 00:18:34.451 "listen_addresses": [ 00:18:34.451 { 00:18:34.451 "trtype": "VFIOUSER", 00:18:34.451 "adrfam": "IPv4", 00:18:34.451 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 
00:18:34.451 "trsvcid": "0" 00:18:34.451 } 00:18:34.451 ], 00:18:34.451 "allow_any_host": true, 00:18:34.451 "hosts": [], 00:18:34.451 "serial_number": "SPDK2", 00:18:34.451 "model_number": "SPDK bdev Controller", 00:18:34.451 "max_namespaces": 32, 00:18:34.451 "min_cntlid": 1, 00:18:34.451 "max_cntlid": 65519, 00:18:34.451 "namespaces": [ 00:18:34.451 { 00:18:34.451 "nsid": 1, 00:18:34.451 "bdev_name": "Malloc2", 00:18:34.451 "name": "Malloc2", 00:18:34.451 "nguid": "B2808F0109D243B18F6DF45C51990C8D", 00:18:34.451 "uuid": "b2808f01-09d2-43b1-8f6d-f45c51990c8d" 00:18:34.451 } 00:18:34.451 ] 00:18:34.451 } 00:18:34.451 ] 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=724213 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:34.451 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:34.710 [2024-11-18 07:52:27.679967] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.711 Malloc3 00:18:34.969 07:52:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:35.227 [2024-11-18 07:52:28.092088] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:35.227 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:35.227 Asynchronous Event Request test 00:18:35.227 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:35.227 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:35.227 Registering asynchronous event callbacks... 00:18:35.227 Starting namespace attribute notice tests for all controllers... 00:18:35.227 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:35.227 aer_cb - Changed Namespace 00:18:35.227 Cleaning up... 
00:18:35.490 [ 00:18:35.490 { 00:18:35.490 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:35.490 "subtype": "Discovery", 00:18:35.490 "listen_addresses": [], 00:18:35.490 "allow_any_host": true, 00:18:35.490 "hosts": [] 00:18:35.490 }, 00:18:35.490 { 00:18:35.490 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:35.490 "subtype": "NVMe", 00:18:35.490 "listen_addresses": [ 00:18:35.490 { 00:18:35.490 "trtype": "VFIOUSER", 00:18:35.490 "adrfam": "IPv4", 00:18:35.490 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:35.490 "trsvcid": "0" 00:18:35.490 } 00:18:35.490 ], 00:18:35.490 "allow_any_host": true, 00:18:35.490 "hosts": [], 00:18:35.490 "serial_number": "SPDK1", 00:18:35.490 "model_number": "SPDK bdev Controller", 00:18:35.490 "max_namespaces": 32, 00:18:35.490 "min_cntlid": 1, 00:18:35.490 "max_cntlid": 65519, 00:18:35.490 "namespaces": [ 00:18:35.490 { 00:18:35.490 "nsid": 1, 00:18:35.490 "bdev_name": "Malloc1", 00:18:35.490 "name": "Malloc1", 00:18:35.490 "nguid": "30DCAE0437DA406FACFFFE4097CAD7C8", 00:18:35.490 "uuid": "30dcae04-37da-406f-acff-fe4097cad7c8" 00:18:35.490 }, 00:18:35.490 { 00:18:35.490 "nsid": 2, 00:18:35.490 "bdev_name": "Malloc3", 00:18:35.490 "name": "Malloc3", 00:18:35.490 "nguid": "C4D7FEEF613347CDB3CA9F99E6D3E936", 00:18:35.490 "uuid": "c4d7feef-6133-47cd-b3ca-9f99e6d3e936" 00:18:35.490 } 00:18:35.490 ] 00:18:35.490 }, 00:18:35.490 { 00:18:35.490 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:35.490 "subtype": "NVMe", 00:18:35.490 "listen_addresses": [ 00:18:35.490 { 00:18:35.490 "trtype": "VFIOUSER", 00:18:35.490 "adrfam": "IPv4", 00:18:35.490 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:35.490 "trsvcid": "0" 00:18:35.490 } 00:18:35.490 ], 00:18:35.490 "allow_any_host": true, 00:18:35.490 "hosts": [], 00:18:35.490 "serial_number": "SPDK2", 00:18:35.490 "model_number": "SPDK bdev Controller", 00:18:35.490 "max_namespaces": 32, 00:18:35.490 "min_cntlid": 1, 00:18:35.490 "max_cntlid": 65519, 00:18:35.490 "namespaces": [ 
00:18:35.490 { 00:18:35.490 "nsid": 1, 00:18:35.490 "bdev_name": "Malloc2", 00:18:35.490 "name": "Malloc2", 00:18:35.490 "nguid": "B2808F0109D243B18F6DF45C51990C8D", 00:18:35.490 "uuid": "b2808f01-09d2-43b1-8f6d-f45c51990c8d" 00:18:35.490 } 00:18:35.490 ] 00:18:35.490 } 00:18:35.490 ] 00:18:35.490 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 724213 00:18:35.490 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:35.490 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:35.490 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:35.490 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:35.490 [2024-11-18 07:52:28.395695] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:18:35.490 [2024-11-18 07:52:28.395734] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724351 ] 00:18:35.490 [2024-11-18 07:52:28.445396] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:35.490 [2024-11-18 07:52:28.453745] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:35.490 [2024-11-18 07:52:28.453776] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1417223000 00:18:35.490 [2024-11-18 07:52:28.454746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:35.490 [2024-11-18 07:52:28.455753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:35.490 [2024-11-18 07:52:28.456756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:35.490 [2024-11-18 07:52:28.457769] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:35.490 [2024-11-18 07:52:28.458776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:35.490 [2024-11-18 07:52:28.459779] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:35.490 [2024-11-18 07:52:28.460791] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:35.491 
[2024-11-18 07:52:28.461796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:35.491 [2024-11-18 07:52:28.462823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:35.491 [2024-11-18 07:52:28.462844] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1415f1b000 00:18:35.491 [2024-11-18 07:52:28.463957] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:35.491 [2024-11-18 07:52:28.478664] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:35.491 [2024-11-18 07:52:28.478704] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:35.491 [2024-11-18 07:52:28.480814] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:35.491 [2024-11-18 07:52:28.480868] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:35.491 [2024-11-18 07:52:28.480958] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:35.491 [2024-11-18 07:52:28.480982] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:35.491 [2024-11-18 07:52:28.480993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:35.491 [2024-11-18 07:52:28.481822] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:35.491 [2024-11-18 07:52:28.481843] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:35.491 [2024-11-18 07:52:28.481871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:35.491 [2024-11-18 07:52:28.482825] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:35.491 [2024-11-18 07:52:28.482846] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:35.491 [2024-11-18 07:52:28.482860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:35.491 [2024-11-18 07:52:28.483831] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:35.491 [2024-11-18 07:52:28.483851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:35.491 [2024-11-18 07:52:28.484847] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:35.491 [2024-11-18 07:52:28.484867] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:35.491 [2024-11-18 07:52:28.484876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:35.491 [2024-11-18 07:52:28.484888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:35.491 [2024-11-18 07:52:28.484997] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:35.491 [2024-11-18 07:52:28.485005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:35.491 [2024-11-18 07:52:28.485013] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:35.491 [2024-11-18 07:52:28.485872] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:35.491 [2024-11-18 07:52:28.486863] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:35.491 [2024-11-18 07:52:28.487874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:35.491 [2024-11-18 07:52:28.488870] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:35.491 [2024-11-18 07:52:28.488952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:35.491 [2024-11-18 07:52:28.489890] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:35.491 [2024-11-18 07:52:28.489910] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:35.491 [2024-11-18 07:52:28.489919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.489942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:35.491 [2024-11-18 07:52:28.489956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.489979] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:35.491 [2024-11-18 07:52:28.489989] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:35.491 [2024-11-18 07:52:28.489996] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:35.491 [2024-11-18 07:52:28.490015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:35.491 [2024-11-18 07:52:28.500521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:35.491 [2024-11-18 07:52:28.500545] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:35.491 [2024-11-18 07:52:28.500555] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:35.491 [2024-11-18 07:52:28.500562] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:35.491 [2024-11-18 07:52:28.500570] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:35.491 [2024-11-18 07:52:28.500586] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:35.491 [2024-11-18 07:52:28.500595] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:35.491 [2024-11-18 07:52:28.500604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.500620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.500637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:35.491 [2024-11-18 07:52:28.508506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:35.491 [2024-11-18 07:52:28.508530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.491 [2024-11-18 07:52:28.508543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.491 [2024-11-18 07:52:28.508555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.491 [2024-11-18 07:52:28.508567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.491 [2024-11-18 07:52:28.508576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.508588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.508602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:35.491 [2024-11-18 07:52:28.516517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:35.491 [2024-11-18 07:52:28.516541] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:35.491 [2024-11-18 07:52:28.516552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.516564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.516574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.516588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:35.491 [2024-11-18 07:52:28.524504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:35.491 [2024-11-18 07:52:28.524581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.524598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:35.491 
[2024-11-18 07:52:28.524611] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:35.491 [2024-11-18 07:52:28.524620] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:35.491 [2024-11-18 07:52:28.524629] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:35.491 [2024-11-18 07:52:28.524639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:35.491 [2024-11-18 07:52:28.532516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:35.491 [2024-11-18 07:52:28.532540] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:35.491 [2024-11-18 07:52:28.532556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.532572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:35.491 [2024-11-18 07:52:28.532585] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:35.491 [2024-11-18 07:52:28.532594] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:35.491 [2024-11-18 07:52:28.532600] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:35.491 [2024-11-18 07:52:28.532609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:35.492 [2024-11-18 07:52:28.540514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:35.492 [2024-11-18 07:52:28.540547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.540564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.540577] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:35.492 [2024-11-18 07:52:28.540586] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:35.492 [2024-11-18 07:52:28.540592] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:35.492 [2024-11-18 07:52:28.540601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:35.492 [2024-11-18 07:52:28.548499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:35.492 [2024-11-18 07:52:28.548521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.548534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.548550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.548561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.548570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.548578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.548587] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:35.492 [2024-11-18 07:52:28.548595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:35.492 [2024-11-18 07:52:28.548607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:35.492 [2024-11-18 07:52:28.548633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:35.492 [2024-11-18 07:52:28.556503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:35.492 [2024-11-18 07:52:28.556543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:35.492 [2024-11-18 07:52:28.564516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:35.492 [2024-11-18 07:52:28.564553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:35.492 [2024-11-18 07:52:28.572516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:35.492 [2024-11-18 
07:52:28.572548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:35.795 [2024-11-18 07:52:28.580504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:35.795 [2024-11-18 07:52:28.580540] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:35.795 [2024-11-18 07:52:28.580552] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:35.795 [2024-11-18 07:52:28.580559] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:35.795 [2024-11-18 07:52:28.580565] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:35.795 [2024-11-18 07:52:28.580571] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:35.795 [2024-11-18 07:52:28.580581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:35.795 [2024-11-18 07:52:28.580594] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:35.795 [2024-11-18 07:52:28.580603] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:35.795 [2024-11-18 07:52:28.580609] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:35.795 [2024-11-18 07:52:28.580618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:35.795 [2024-11-18 07:52:28.580630] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:35.795 [2024-11-18 07:52:28.580638] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:35.795 [2024-11-18 07:52:28.580644] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:35.795 [2024-11-18 07:52:28.580653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:35.795 [2024-11-18 07:52:28.580666] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:35.795 [2024-11-18 07:52:28.580674] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:35.795 [2024-11-18 07:52:28.580681] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:35.795 [2024-11-18 07:52:28.580690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:35.796 [2024-11-18 07:52:28.588507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:35.796 [2024-11-18 07:52:28.588541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:35.796 [2024-11-18 07:52:28.588561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:35.796 [2024-11-18 07:52:28.588574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:35.796 ===================================================== 00:18:35.796 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:35.796 ===================================================== 00:18:35.796 Controller Capabilities/Features 00:18:35.796 
================================ 00:18:35.796 Vendor ID: 4e58 00:18:35.796 Subsystem Vendor ID: 4e58 00:18:35.796 Serial Number: SPDK2 00:18:35.796 Model Number: SPDK bdev Controller 00:18:35.796 Firmware Version: 25.01 00:18:35.796 Recommended Arb Burst: 6 00:18:35.796 IEEE OUI Identifier: 8d 6b 50 00:18:35.796 Multi-path I/O 00:18:35.796 May have multiple subsystem ports: Yes 00:18:35.796 May have multiple controllers: Yes 00:18:35.796 Associated with SR-IOV VF: No 00:18:35.796 Max Data Transfer Size: 131072 00:18:35.796 Max Number of Namespaces: 32 00:18:35.796 Max Number of I/O Queues: 127 00:18:35.796 NVMe Specification Version (VS): 1.3 00:18:35.796 NVMe Specification Version (Identify): 1.3 00:18:35.796 Maximum Queue Entries: 256 00:18:35.796 Contiguous Queues Required: Yes 00:18:35.796 Arbitration Mechanisms Supported 00:18:35.796 Weighted Round Robin: Not Supported 00:18:35.796 Vendor Specific: Not Supported 00:18:35.796 Reset Timeout: 15000 ms 00:18:35.796 Doorbell Stride: 4 bytes 00:18:35.796 NVM Subsystem Reset: Not Supported 00:18:35.796 Command Sets Supported 00:18:35.796 NVM Command Set: Supported 00:18:35.796 Boot Partition: Not Supported 00:18:35.796 Memory Page Size Minimum: 4096 bytes 00:18:35.796 Memory Page Size Maximum: 4096 bytes 00:18:35.796 Persistent Memory Region: Not Supported 00:18:35.796 Optional Asynchronous Events Supported 00:18:35.796 Namespace Attribute Notices: Supported 00:18:35.796 Firmware Activation Notices: Not Supported 00:18:35.796 ANA Change Notices: Not Supported 00:18:35.796 PLE Aggregate Log Change Notices: Not Supported 00:18:35.796 LBA Status Info Alert Notices: Not Supported 00:18:35.796 EGE Aggregate Log Change Notices: Not Supported 00:18:35.796 Normal NVM Subsystem Shutdown event: Not Supported 00:18:35.796 Zone Descriptor Change Notices: Not Supported 00:18:35.796 Discovery Log Change Notices: Not Supported 00:18:35.796 Controller Attributes 00:18:35.796 128-bit Host Identifier: Supported 00:18:35.796 
Non-Operational Permissive Mode: Not Supported 00:18:35.796 NVM Sets: Not Supported 00:18:35.796 Read Recovery Levels: Not Supported 00:18:35.796 Endurance Groups: Not Supported 00:18:35.796 Predictable Latency Mode: Not Supported 00:18:35.796 Traffic Based Keep ALive: Not Supported 00:18:35.796 Namespace Granularity: Not Supported 00:18:35.796 SQ Associations: Not Supported 00:18:35.796 UUID List: Not Supported 00:18:35.796 Multi-Domain Subsystem: Not Supported 00:18:35.796 Fixed Capacity Management: Not Supported 00:18:35.796 Variable Capacity Management: Not Supported 00:18:35.796 Delete Endurance Group: Not Supported 00:18:35.796 Delete NVM Set: Not Supported 00:18:35.796 Extended LBA Formats Supported: Not Supported 00:18:35.796 Flexible Data Placement Supported: Not Supported 00:18:35.796 00:18:35.796 Controller Memory Buffer Support 00:18:35.796 ================================ 00:18:35.796 Supported: No 00:18:35.796 00:18:35.796 Persistent Memory Region Support 00:18:35.796 ================================ 00:18:35.796 Supported: No 00:18:35.796 00:18:35.796 Admin Command Set Attributes 00:18:35.796 ============================ 00:18:35.796 Security Send/Receive: Not Supported 00:18:35.796 Format NVM: Not Supported 00:18:35.796 Firmware Activate/Download: Not Supported 00:18:35.796 Namespace Management: Not Supported 00:18:35.796 Device Self-Test: Not Supported 00:18:35.796 Directives: Not Supported 00:18:35.796 NVMe-MI: Not Supported 00:18:35.796 Virtualization Management: Not Supported 00:18:35.796 Doorbell Buffer Config: Not Supported 00:18:35.796 Get LBA Status Capability: Not Supported 00:18:35.796 Command & Feature Lockdown Capability: Not Supported 00:18:35.796 Abort Command Limit: 4 00:18:35.796 Async Event Request Limit: 4 00:18:35.796 Number of Firmware Slots: N/A 00:18:35.796 Firmware Slot 1 Read-Only: N/A 00:18:35.796 Firmware Activation Without Reset: N/A 00:18:35.796 Multiple Update Detection Support: N/A 00:18:35.796 Firmware Update 
Granularity: No Information Provided 00:18:35.796 Per-Namespace SMART Log: No 00:18:35.796 Asymmetric Namespace Access Log Page: Not Supported 00:18:35.796 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:35.796 Command Effects Log Page: Supported 00:18:35.796 Get Log Page Extended Data: Supported 00:18:35.796 Telemetry Log Pages: Not Supported 00:18:35.796 Persistent Event Log Pages: Not Supported 00:18:35.796 Supported Log Pages Log Page: May Support 00:18:35.796 Commands Supported & Effects Log Page: Not Supported 00:18:35.796 Feature Identifiers & Effects Log Page:May Support 00:18:35.796 NVMe-MI Commands & Effects Log Page: May Support 00:18:35.796 Data Area 4 for Telemetry Log: Not Supported 00:18:35.796 Error Log Page Entries Supported: 128 00:18:35.796 Keep Alive: Supported 00:18:35.796 Keep Alive Granularity: 10000 ms 00:18:35.796 00:18:35.796 NVM Command Set Attributes 00:18:35.796 ========================== 00:18:35.796 Submission Queue Entry Size 00:18:35.796 Max: 64 00:18:35.796 Min: 64 00:18:35.796 Completion Queue Entry Size 00:18:35.796 Max: 16 00:18:35.796 Min: 16 00:18:35.796 Number of Namespaces: 32 00:18:35.796 Compare Command: Supported 00:18:35.796 Write Uncorrectable Command: Not Supported 00:18:35.796 Dataset Management Command: Supported 00:18:35.796 Write Zeroes Command: Supported 00:18:35.796 Set Features Save Field: Not Supported 00:18:35.796 Reservations: Not Supported 00:18:35.796 Timestamp: Not Supported 00:18:35.796 Copy: Supported 00:18:35.796 Volatile Write Cache: Present 00:18:35.796 Atomic Write Unit (Normal): 1 00:18:35.796 Atomic Write Unit (PFail): 1 00:18:35.796 Atomic Compare & Write Unit: 1 00:18:35.796 Fused Compare & Write: Supported 00:18:35.796 Scatter-Gather List 00:18:35.796 SGL Command Set: Supported (Dword aligned) 00:18:35.796 SGL Keyed: Not Supported 00:18:35.796 SGL Bit Bucket Descriptor: Not Supported 00:18:35.796 SGL Metadata Pointer: Not Supported 00:18:35.796 Oversized SGL: Not Supported 00:18:35.796 SGL 
Metadata Address: Not Supported 00:18:35.796 SGL Offset: Not Supported 00:18:35.796 Transport SGL Data Block: Not Supported 00:18:35.796 Replay Protected Memory Block: Not Supported 00:18:35.796 00:18:35.796 Firmware Slot Information 00:18:35.796 ========================= 00:18:35.796 Active slot: 1 00:18:35.796 Slot 1 Firmware Revision: 25.01 00:18:35.796 00:18:35.796 00:18:35.796 Commands Supported and Effects 00:18:35.796 ============================== 00:18:35.796 Admin Commands 00:18:35.796 -------------- 00:18:35.796 Get Log Page (02h): Supported 00:18:35.796 Identify (06h): Supported 00:18:35.796 Abort (08h): Supported 00:18:35.796 Set Features (09h): Supported 00:18:35.796 Get Features (0Ah): Supported 00:18:35.796 Asynchronous Event Request (0Ch): Supported 00:18:35.796 Keep Alive (18h): Supported 00:18:35.796 I/O Commands 00:18:35.796 ------------ 00:18:35.796 Flush (00h): Supported LBA-Change 00:18:35.796 Write (01h): Supported LBA-Change 00:18:35.796 Read (02h): Supported 00:18:35.796 Compare (05h): Supported 00:18:35.796 Write Zeroes (08h): Supported LBA-Change 00:18:35.796 Dataset Management (09h): Supported LBA-Change 00:18:35.796 Copy (19h): Supported LBA-Change 00:18:35.796 00:18:35.796 Error Log 00:18:35.796 ========= 00:18:35.796 00:18:35.796 Arbitration 00:18:35.796 =========== 00:18:35.796 Arbitration Burst: 1 00:18:35.796 00:18:35.796 Power Management 00:18:35.796 ================ 00:18:35.796 Number of Power States: 1 00:18:35.796 Current Power State: Power State #0 00:18:35.796 Power State #0: 00:18:35.796 Max Power: 0.00 W 00:18:35.796 Non-Operational State: Operational 00:18:35.796 Entry Latency: Not Reported 00:18:35.796 Exit Latency: Not Reported 00:18:35.796 Relative Read Throughput: 0 00:18:35.796 Relative Read Latency: 0 00:18:35.796 Relative Write Throughput: 0 00:18:35.796 Relative Write Latency: 0 00:18:35.796 Idle Power: Not Reported 00:18:35.796 Active Power: Not Reported 00:18:35.796 Non-Operational Permissive Mode: Not 
Supported 00:18:35.796 00:18:35.797 Health Information 00:18:35.797 ================== 00:18:35.797 Critical Warnings: 00:18:35.797 Available Spare Space: OK 00:18:35.797 Temperature: OK 00:18:35.797 Device Reliability: OK 00:18:35.797 Read Only: No 00:18:35.797 Volatile Memory Backup: OK 00:18:35.797 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:35.797 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:35.797 Available Spare: 0% 00:18:35.797 Available Sp[2024-11-18 07:52:28.588699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:35.797 [2024-11-18 07:52:28.596516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:35.797 [2024-11-18 07:52:28.596569] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:35.797 [2024-11-18 07:52:28.596590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.797 [2024-11-18 07:52:28.596601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.797 [2024-11-18 07:52:28.596612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.797 [2024-11-18 07:52:28.596622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.797 [2024-11-18 07:52:28.596711] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:35.797 [2024-11-18 07:52:28.596735] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:35.797 
[2024-11-18 07:52:28.597709] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:35.797 [2024-11-18 07:52:28.597786] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:35.797 [2024-11-18 07:52:28.597801] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:35.797 [2024-11-18 07:52:28.598725] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:35.797 [2024-11-18 07:52:28.598750] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:35.797 [2024-11-18 07:52:28.598820] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:35.797 [2024-11-18 07:52:28.600036] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:35.797 are Threshold: 0% 00:18:35.797 Life Percentage Used: 0% 00:18:35.797 Data Units Read: 0 00:18:35.797 Data Units Written: 0 00:18:35.797 Host Read Commands: 0 00:18:35.797 Host Write Commands: 0 00:18:35.797 Controller Busy Time: 0 minutes 00:18:35.797 Power Cycles: 0 00:18:35.797 Power On Hours: 0 hours 00:18:35.797 Unsafe Shutdowns: 0 00:18:35.797 Unrecoverable Media Errors: 0 00:18:35.797 Lifetime Error Log Entries: 0 00:18:35.797 Warning Temperature Time: 0 minutes 00:18:35.797 Critical Temperature Time: 0 minutes 00:18:35.797 00:18:35.797 Number of Queues 00:18:35.797 ================ 00:18:35.797 Number of I/O Submission Queues: 127 00:18:35.797 Number of I/O Completion Queues: 127 00:18:35.797 00:18:35.797 Active Namespaces 00:18:35.797 ================= 00:18:35.797 Namespace ID:1 00:18:35.797 Error Recovery Timeout: Unlimited 
00:18:35.797 Command Set Identifier: NVM (00h) 00:18:35.797 Deallocate: Supported 00:18:35.797 Deallocated/Unwritten Error: Not Supported 00:18:35.797 Deallocated Read Value: Unknown 00:18:35.797 Deallocate in Write Zeroes: Not Supported 00:18:35.797 Deallocated Guard Field: 0xFFFF 00:18:35.797 Flush: Supported 00:18:35.797 Reservation: Supported 00:18:35.797 Namespace Sharing Capabilities: Multiple Controllers 00:18:35.797 Size (in LBAs): 131072 (0GiB) 00:18:35.797 Capacity (in LBAs): 131072 (0GiB) 00:18:35.797 Utilization (in LBAs): 131072 (0GiB) 00:18:35.797 NGUID: B2808F0109D243B18F6DF45C51990C8D 00:18:35.797 UUID: b2808f01-09d2-43b1-8f6d-f45c51990c8d 00:18:35.797 Thin Provisioning: Not Supported 00:18:35.797 Per-NS Atomic Units: Yes 00:18:35.797 Atomic Boundary Size (Normal): 0 00:18:35.797 Atomic Boundary Size (PFail): 0 00:18:35.797 Atomic Boundary Offset: 0 00:18:35.797 Maximum Single Source Range Length: 65535 00:18:35.797 Maximum Copy Length: 65535 00:18:35.797 Maximum Source Range Count: 1 00:18:35.797 NGUID/EUI64 Never Reused: No 00:18:35.797 Namespace Write Protected: No 00:18:35.797 Number of LBA Formats: 1 00:18:35.797 Current LBA Format: LBA Format #00 00:18:35.797 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:35.797 00:18:35.797 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:35.797 [2024-11-18 07:52:28.845341] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:41.095 Initializing NVMe Controllers 00:18:41.095 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:41.095 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:41.095 Initialization complete. Launching workers. 00:18:41.095 ======================================================== 00:18:41.095 Latency(us) 00:18:41.095 Device Information : IOPS MiB/s Average min max 00:18:41.095 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33257.60 129.91 3848.15 1158.12 9635.04 00:18:41.095 ======================================================== 00:18:41.095 Total : 33257.60 129.91 3848.15 1158.12 9635.04 00:18:41.095 00:18:41.095 [2024-11-18 07:52:33.952883] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:41.095 07:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:41.354 [2024-11-18 07:52:34.215587] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:46.630 Initializing NVMe Controllers 00:18:46.630 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:46.630 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:46.630 Initialization complete. Launching workers. 
00:18:46.630 ======================================================== 00:18:46.630 Latency(us) 00:18:46.630 Device Information : IOPS MiB/s Average min max 00:18:46.630 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30658.51 119.76 4174.10 1226.03 9722.11 00:18:46.631 ======================================================== 00:18:46.631 Total : 30658.51 119.76 4174.10 1226.03 9722.11 00:18:46.631 00:18:46.631 [2024-11-18 07:52:39.237835] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:46.631 07:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:46.631 [2024-11-18 07:52:39.464988] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:51.908 [2024-11-18 07:52:44.594638] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:51.908 Initializing NVMe Controllers 00:18:51.908 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:51.908 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:51.908 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:51.908 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:51.908 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:51.908 Initialization complete. Launching workers. 
00:18:51.908 Starting thread on core 2 00:18:51.908 Starting thread on core 3 00:18:51.908 Starting thread on core 1 00:18:51.908 07:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:51.908 [2024-11-18 07:52:44.925065] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:55.204 [2024-11-18 07:52:48.009734] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:55.204 Initializing NVMe Controllers 00:18:55.204 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.204 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:55.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:55.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:55.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:55.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:55.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:55.204 Initialization complete. Launching workers. 
00:18:55.204 Starting thread on core 1 with urgent priority queue 00:18:55.204 Starting thread on core 2 with urgent priority queue 00:18:55.204 Starting thread on core 3 with urgent priority queue 00:18:55.204 Starting thread on core 0 with urgent priority queue 00:18:55.204 SPDK bdev Controller (SPDK2 ) core 0: 4872.00 IO/s 20.53 secs/100000 ios 00:18:55.204 SPDK bdev Controller (SPDK2 ) core 1: 5540.00 IO/s 18.05 secs/100000 ios 00:18:55.204 SPDK bdev Controller (SPDK2 ) core 2: 5683.33 IO/s 17.60 secs/100000 ios 00:18:55.204 SPDK bdev Controller (SPDK2 ) core 3: 6069.33 IO/s 16.48 secs/100000 ios 00:18:55.204 ======================================================== 00:18:55.204 00:18:55.204 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:55.462 [2024-11-18 07:52:48.323769] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:55.462 Initializing NVMe Controllers 00:18:55.462 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.462 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.463 Namespace ID: 1 size: 0GB 00:18:55.463 Initialization complete. 00:18:55.463 INFO: using host memory buffer for IO 00:18:55.463 Hello world! 
00:18:55.463 [2024-11-18 07:52:48.335867] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:55.463 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:55.721 [2024-11-18 07:52:48.648302] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:56.671 Initializing NVMe Controllers 00:18:56.671 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:56.671 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:56.671 Initialization complete. Launching workers. 00:18:56.671 submit (in ns) avg, min, max = 7713.4, 3494.4, 4024353.3 00:18:56.671 complete (in ns) avg, min, max = 27311.3, 2064.4, 8004852.2 00:18:56.671 00:18:56.671 Submit histogram 00:18:56.671 ================ 00:18:56.671 Range in us Cumulative Count 00:18:56.671 3.484 - 3.508: 0.1644% ( 21) 00:18:56.671 3.508 - 3.532: 0.7436% ( 74) 00:18:56.671 3.532 - 3.556: 2.3795% ( 209) 00:18:56.671 3.556 - 3.579: 6.1052% ( 476) 00:18:56.671 3.579 - 3.603: 12.2260% ( 782) 00:18:56.671 3.603 - 3.627: 21.3134% ( 1161) 00:18:56.671 3.627 - 3.650: 30.0798% ( 1120) 00:18:56.671 3.650 - 3.674: 37.8914% ( 998) 00:18:56.671 3.674 - 3.698: 43.9418% ( 773) 00:18:56.671 3.698 - 3.721: 50.1487% ( 793) 00:18:56.671 3.721 - 3.745: 55.6825% ( 707) 00:18:56.671 3.745 - 3.769: 60.4415% ( 608) 00:18:56.671 3.769 - 3.793: 63.7210% ( 419) 00:18:56.671 3.793 - 3.816: 67.0632% ( 427) 00:18:56.671 3.816 - 3.840: 70.4211% ( 429) 00:18:56.671 3.840 - 3.864: 74.7495% ( 553) 00:18:56.671 3.864 - 3.887: 79.0466% ( 549) 00:18:56.671 3.887 - 3.911: 82.3262% ( 419) 00:18:56.671 3.911 - 3.935: 85.1440% ( 360) 00:18:56.671 3.935 - 3.959: 87.0852% ( 248) 00:18:56.671 3.959 - 3.982: 88.7680% ( 215) 
00:18:56.671 3.982 - 4.006: 90.3961% ( 208) 00:18:56.671 4.006 - 4.030: 91.6719% ( 163) 00:18:56.671 4.030 - 4.053: 92.7520% ( 138) 00:18:56.671 4.053 - 4.077: 93.5817% ( 106) 00:18:56.671 4.077 - 4.101: 94.4192% ( 107) 00:18:56.671 4.101 - 4.124: 95.0767% ( 84) 00:18:56.671 4.124 - 4.148: 95.5620% ( 62) 00:18:56.671 4.148 - 4.172: 95.8986% ( 43) 00:18:56.671 4.172 - 4.196: 96.0629% ( 21) 00:18:56.671 4.196 - 4.219: 96.2743% ( 27) 00:18:56.671 4.219 - 4.243: 96.4621% ( 24) 00:18:56.671 4.243 - 4.267: 96.5717% ( 14) 00:18:56.671 4.267 - 4.290: 96.7282% ( 20) 00:18:56.671 4.290 - 4.314: 96.8300% ( 13) 00:18:56.671 4.314 - 4.338: 96.9004% ( 9) 00:18:56.671 4.338 - 4.361: 96.9787% ( 10) 00:18:56.671 4.361 - 4.385: 97.0100% ( 4) 00:18:56.671 4.385 - 4.409: 97.0413% ( 4) 00:18:56.671 4.409 - 4.433: 97.1118% ( 9) 00:18:56.672 4.433 - 4.456: 97.1274% ( 2) 00:18:56.672 4.456 - 4.480: 97.1587% ( 4) 00:18:56.672 4.480 - 4.504: 97.1979% ( 5) 00:18:56.672 4.527 - 4.551: 97.2135% ( 2) 00:18:56.672 4.551 - 4.575: 97.2292% ( 2) 00:18:56.672 4.599 - 4.622: 97.2527% ( 3) 00:18:56.672 4.622 - 4.646: 97.2918% ( 5) 00:18:56.672 4.646 - 4.670: 97.3075% ( 2) 00:18:56.672 4.670 - 4.693: 97.3622% ( 7) 00:18:56.672 4.693 - 4.717: 97.4170% ( 7) 00:18:56.672 4.717 - 4.741: 97.4718% ( 7) 00:18:56.672 4.741 - 4.764: 97.5501% ( 10) 00:18:56.672 4.764 - 4.788: 97.5971% ( 6) 00:18:56.672 4.788 - 4.812: 97.6518% ( 7) 00:18:56.672 4.812 - 4.836: 97.6753% ( 3) 00:18:56.672 4.836 - 4.859: 97.7066% ( 4) 00:18:56.672 4.859 - 4.883: 97.7536% ( 6) 00:18:56.672 4.883 - 4.907: 97.7771% ( 3) 00:18:56.672 4.907 - 4.930: 97.8162% ( 5) 00:18:56.672 4.930 - 4.954: 97.8397% ( 3) 00:18:56.672 4.954 - 4.978: 97.9023% ( 8) 00:18:56.672 4.978 - 5.001: 97.9336% ( 4) 00:18:56.672 5.001 - 5.025: 97.9571% ( 3) 00:18:56.672 5.025 - 5.049: 97.9649% ( 1) 00:18:56.672 5.049 - 5.073: 98.0119% ( 6) 00:18:56.672 5.073 - 5.096: 98.0197% ( 1) 00:18:56.672 5.096 - 5.120: 98.0276% ( 1) 00:18:56.672 5.120 - 5.144: 98.0354% ( 1) 
00:18:56.672 5.144 - 5.167: 98.0667% ( 4) 00:18:56.672 5.191 - 5.215: 98.0745% ( 1) 00:18:56.672 5.215 - 5.239: 98.0823% ( 1) 00:18:56.672 5.262 - 5.286: 98.0902% ( 1) 00:18:56.672 5.357 - 5.381: 98.0980% ( 1) 00:18:56.672 5.476 - 5.499: 98.1058% ( 1) 00:18:56.672 5.713 - 5.736: 98.1137% ( 1) 00:18:56.672 5.760 - 5.784: 98.1293% ( 2) 00:18:56.672 5.855 - 5.879: 98.1371% ( 1) 00:18:56.672 6.353 - 6.400: 98.1450% ( 1) 00:18:56.672 6.779 - 6.827: 98.1606% ( 2) 00:18:56.672 6.827 - 6.874: 98.1684% ( 1) 00:18:56.672 6.874 - 6.921: 98.1919% ( 3) 00:18:56.672 6.921 - 6.969: 98.1997% ( 1) 00:18:56.672 6.969 - 7.016: 98.2076% ( 1) 00:18:56.672 7.064 - 7.111: 98.2232% ( 2) 00:18:56.672 7.253 - 7.301: 98.2311% ( 1) 00:18:56.672 7.443 - 7.490: 98.2389% ( 1) 00:18:56.672 7.490 - 7.538: 98.2702% ( 4) 00:18:56.672 7.538 - 7.585: 98.2780% ( 1) 00:18:56.672 7.727 - 7.775: 98.2937% ( 2) 00:18:56.672 7.870 - 7.917: 98.3015% ( 1) 00:18:56.672 7.917 - 7.964: 98.3250% ( 3) 00:18:56.672 7.964 - 8.012: 98.3328% ( 1) 00:18:56.672 8.012 - 8.059: 98.3406% ( 1) 00:18:56.672 8.107 - 8.154: 98.3563% ( 2) 00:18:56.672 8.249 - 8.296: 98.3641% ( 1) 00:18:56.672 8.344 - 8.391: 98.3719% ( 1) 00:18:56.672 8.391 - 8.439: 98.3798% ( 1) 00:18:56.672 8.628 - 8.676: 98.3876% ( 1) 00:18:56.672 8.676 - 8.723: 98.3954% ( 1) 00:18:56.672 8.723 - 8.770: 98.4189% ( 3) 00:18:56.672 8.913 - 8.960: 98.4424% ( 3) 00:18:56.672 8.960 - 9.007: 98.4502% ( 1) 00:18:56.672 9.007 - 9.055: 98.4580% ( 1) 00:18:56.672 9.055 - 9.102: 98.4659% ( 1) 00:18:56.672 9.102 - 9.150: 98.4737% ( 1) 00:18:56.672 9.150 - 9.197: 98.4815% ( 1) 00:18:56.672 9.244 - 9.292: 98.4972% ( 2) 00:18:56.672 9.292 - 9.339: 98.5050% ( 1) 00:18:56.672 9.339 - 9.387: 98.5128% ( 1) 00:18:56.672 9.387 - 9.434: 98.5207% ( 1) 00:18:56.672 9.434 - 9.481: 98.5285% ( 1) 00:18:56.672 9.481 - 9.529: 98.5441% ( 2) 00:18:56.672 9.529 - 9.576: 98.5598% ( 2) 00:18:56.672 9.576 - 9.624: 98.5676% ( 1) 00:18:56.672 9.671 - 9.719: 98.5755% ( 1) 00:18:56.672 9.861 - 
9.908: 98.5989% ( 3) 00:18:56.672 9.956 - 10.003: 98.6146% ( 2) 00:18:56.672 10.003 - 10.050: 98.6302% ( 2) 00:18:56.672 10.098 - 10.145: 98.6381% ( 1) 00:18:56.672 10.240 - 10.287: 98.6459% ( 1) 00:18:56.672 10.382 - 10.430: 98.6616% ( 2) 00:18:56.672 10.524 - 10.572: 98.6694% ( 1) 00:18:56.672 10.572 - 10.619: 98.6772% ( 1) 00:18:56.672 10.619 - 10.667: 98.6850% ( 1) 00:18:56.672 10.761 - 10.809: 98.6929% ( 1) 00:18:56.672 10.856 - 10.904: 98.7007% ( 1) 00:18:56.672 10.904 - 10.951: 98.7085% ( 1) 00:18:56.672 10.999 - 11.046: 98.7163% ( 1) 00:18:56.672 11.046 - 11.093: 98.7242% ( 1) 00:18:56.672 11.093 - 11.141: 98.7320% ( 1) 00:18:56.672 11.188 - 11.236: 98.7398% ( 1) 00:18:56.672 11.473 - 11.520: 98.7477% ( 1) 00:18:56.672 11.615 - 11.662: 98.7633% ( 2) 00:18:56.672 11.662 - 11.710: 98.7711% ( 1) 00:18:56.672 11.804 - 11.852: 98.7790% ( 1) 00:18:56.672 11.852 - 11.899: 98.7868% ( 1) 00:18:56.672 11.947 - 11.994: 98.7946% ( 1) 00:18:56.672 11.994 - 12.041: 98.8024% ( 1) 00:18:56.672 12.041 - 12.089: 98.8103% ( 1) 00:18:56.672 12.610 - 12.705: 98.8259% ( 2) 00:18:56.672 12.800 - 12.895: 98.8416% ( 2) 00:18:56.672 13.274 - 13.369: 98.8572% ( 2) 00:18:56.672 13.369 - 13.464: 98.8651% ( 1) 00:18:56.672 13.464 - 13.559: 98.8729% ( 1) 00:18:56.672 13.559 - 13.653: 98.8807% ( 1) 00:18:56.672 13.653 - 13.748: 98.8885% ( 1) 00:18:56.672 13.748 - 13.843: 98.9042% ( 2) 00:18:56.672 13.938 - 14.033: 98.9120% ( 1) 00:18:56.672 14.317 - 14.412: 98.9198% ( 1) 00:18:56.672 14.791 - 14.886: 98.9277% ( 1) 00:18:56.672 15.739 - 15.834: 98.9355% ( 1) 00:18:56.672 16.972 - 17.067: 98.9433% ( 1) 00:18:56.672 17.067 - 17.161: 98.9512% ( 1) 00:18:56.672 17.161 - 17.256: 98.9590% ( 1) 00:18:56.672 17.351 - 17.446: 98.9903% ( 4) 00:18:56.672 17.446 - 17.541: 99.0216% ( 4) 00:18:56.672 17.541 - 17.636: 99.0842% ( 8) 00:18:56.672 17.636 - 17.730: 99.1390% ( 7) 00:18:56.672 17.730 - 17.825: 99.1938% ( 7) 00:18:56.672 17.825 - 17.920: 99.2721% ( 10) 00:18:56.672 17.920 - 18.015: 99.3582% ( 
11) 00:18:56.672 18.015 - 18.110: 99.4051% ( 6) 00:18:56.672 18.110 - 18.204: 99.4521% ( 6) 00:18:56.672 18.204 - 18.299: 99.5069% ( 7) 00:18:56.672 18.299 - 18.394: 99.5852% ( 10) 00:18:56.672 18.394 - 18.489: 99.6713% ( 11) 00:18:56.672 18.489 - 18.584: 99.6947% ( 3) 00:18:56.672 18.584 - 18.679: 99.7026% ( 1) 00:18:56.672 18.679 - 18.773: 99.7339% ( 4) 00:18:56.672 18.773 - 18.868: 99.7495% ( 2) 00:18:56.672 18.868 - 18.963: 99.7887% ( 5) 00:18:56.672 18.963 - 19.058: 99.7965% ( 1) 00:18:56.672 19.153 - 19.247: 99.8200% ( 3) 00:18:56.672 19.247 - 19.342: 99.8278% ( 1) 00:18:56.672 19.342 - 19.437: 99.8356% ( 1) 00:18:56.672 19.627 - 19.721: 99.8435% ( 1) 00:18:56.672 19.721 - 19.816: 99.8513% ( 1) 00:18:56.672 20.670 - 20.764: 99.8591% ( 1) 00:18:56.672 22.376 - 22.471: 99.8669% ( 1) 00:18:56.672 23.135 - 23.230: 99.8748% ( 1) 00:18:56.672 23.324 - 23.419: 99.8904% ( 2) 00:18:56.672 23.514 - 23.609: 99.8982% ( 1) 00:18:56.672 25.790 - 25.979: 99.9061% ( 1) 00:18:56.672 3980.705 - 4004.978: 99.9843% ( 10) 00:18:56.672 4004.978 - 4029.250: 100.0000% ( 2) 00:18:56.672 00:18:56.672 Complete histogram 00:18:56.672 ================== 00:18:56.672 Range in us Cumulative Count 00:18:56.672 2.062 - 2.074: 7.8820% ( 1007) 00:18:56.672 2.074 - 2.086: 42.5016% ( 4423) 00:18:56.672 2.086 - 2.098: 45.1158% ( 334) 00:18:56.672 2.098 - 2.110: 51.7846% ( 852) 00:18:56.672 2.110 - 2.121: 58.2655% ( 828) 00:18:56.672 2.121 - 2.133: 59.6039% ( 171) 00:18:56.672 2.133 - 2.145: 67.7755% ( 1044) 00:18:56.672 2.145 - 2.157: 77.3403% ( 1222) 00:18:56.672 2.157 - 2.169: 78.1074% ( 98) 00:18:56.672 2.169 - 2.181: 80.7138% ( 333) 00:18:56.672 2.181 - 2.193: 82.5297% ( 232) 00:18:56.672 2.193 - 2.204: 83.0542% ( 67) 00:18:56.672 2.204 - 2.216: 85.5276% ( 316) 00:18:56.672 2.216 - 2.228: 88.9480% ( 437) 00:18:56.672 2.228 - 2.240: 90.8265% ( 240) 00:18:56.672 2.240 - 2.252: 92.4076% ( 202) 00:18:56.672 2.252 - 2.264: 93.3156% ( 116) 00:18:56.672 2.264 - 2.276: 93.5661% ( 32) 00:18:56.672 
2.276 - 2.287: 93.9183% ( 45) 00:18:56.672 2.287 - 2.299: 94.2705% ( 45) 00:18:56.672 2.299 - 2.311: 94.8810% ( 78) 00:18:56.672 2.311 - 2.323: 95.2489% ( 47) 00:18:56.672 2.323 - 2.335: 95.3272% ( 10) 00:18:56.672 2.335 - 2.347: 95.3898% ( 8) 00:18:56.672 2.347 - 2.359: 95.4524% ( 8) 00:18:56.672 2.359 - 2.370: 95.6716% ( 28) 00:18:56.672 2.370 - 2.382: 95.8516% ( 23) 00:18:56.672 2.382 - 2.394: 96.1255% ( 35) 00:18:56.672 2.394 - 2.406: 96.4543% ( 42) 00:18:56.672 2.406 - 2.418: 96.6656% ( 27) 00:18:56.672 2.418 - 2.430: 96.8535% ( 24) 00:18:56.672 2.430 - 2.441: 97.1274% ( 35) 00:18:56.672 2.441 - 2.453: 97.2918% ( 21) 00:18:56.672 2.453 - 2.465: 97.4327% ( 18) 00:18:56.672 2.465 - 2.477: 97.5579% ( 16) 00:18:56.672 2.477 - 2.489: 97.7771% ( 28) 00:18:56.673 2.489 - 2.501: 97.9180% ( 18) 00:18:56.673 2.501 - 2.513: 97.9962% ( 10) 00:18:56.673 2.513 - 2.524: 98.0432% ( 6) 00:18:56.673 2.524 - 2.536: 98.1137% ( 9) 00:18:56.673 2.536 - 2.548: 98.1684% ( 7) 00:18:56.673 2.548 - 2.560: 98.1997% ( 4) 00:18:56.673 2.560 - 2.572: 98.2311% ( 4) 00:18:56.673 2.572 - 2.584: 98.2624% ( 4) 00:18:56.673 2.584 - 2.596: 98.2937% ( 4) 00:18:56.673 2.596 - 2.607: 98.3172% ( 3) 00:18:56.673 2.607 - 2.619: 98.3250% ( 1) 00:18:56.673 2.619 - 2.631: 98.3485% ( 3) 00:18:56.673 2.643 - 2.655: 98.3641% ( 2) 00:18:56.673 2.655 - 2.667: 98.3719% ( 1) 00:18:56.673 2.667 - 2.679: 98.3798% ( 1) 00:18:56.673 2.690 - 2.702: 98.3876% ( 1) 00:18:56.673 2.702 - 2.714: 98.3954% ( 1) 00:18:56.673 2.714 - 2.726: 98.4033% ( 1) 00:18:56.673 2.868 - 2.880: 98.4111% ( 1) 00:18:56.673 2.892 - 2.904: 98.4189% ( 1) 00:18:56.673 2.939 - 2.951: 98.4267% ( 1) 00:18:56.673 3.366 - 3.390: 98.4346% ( 1) 00:18:56.673 3.627 - 3.650: 98.4424% ( 1) 00:18:56.673 3.650 - 3.674: 98.4502% ( 1) 00:18:56.673 3.674 - 3.698: 98.4580% ( 1) 00:18:56.673 3.721 - 3.745: 98.4894% ( 4) 00:18:56.673 3.745 - 3.769: 98.4972% ( 1) 00:18:56.673 3.793 - 3.816: 98.5050% ( 1) 00:18:56.673 3.816 - 3.840: 98.5207% ( 2) 00:18:56.673 3.864 - 
3.887: 98.5285% ( 1) 00:18:56.673 3.887 - 3.911: 98.5363% ( 1) [2024-11-18 07:52:49.742291] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:56.932 3.911 - 3.935: 98.5441% ( 1) 00:18:56.932 3.959 - 3.982: 98.5520% ( 1) 00:18:56.932 4.006 - 4.030: 98.5598% ( 1) 00:18:56.932 4.148 - 4.172: 98.5676% ( 1) 00:18:56.932 4.290 - 4.314: 98.5833% ( 2) 00:18:56.932 4.314 - 4.338: 98.5911% ( 1) 00:18:56.932 5.997 - 6.021: 98.5989% ( 1) 00:18:56.932 6.116 - 6.163: 98.6068% ( 1) 00:18:56.932 6.827 - 6.874: 98.6146% ( 1) 00:18:56.932 6.921 - 6.969: 98.6224% ( 1) 00:18:56.932 6.969 - 7.016: 98.6381% ( 2) 00:18:56.932 7.064 - 7.111: 98.6459% ( 1) 00:18:56.932 7.680 - 7.727: 98.6537% ( 1) 00:18:56.932 7.727 - 7.775: 98.6616% ( 1) 00:18:56.932 7.775 - 7.822: 98.6694% ( 1) 00:18:56.932 8.628 - 8.676: 98.6772% ( 1) 00:18:56.932 9.102 - 9.150: 98.6850% ( 1) 00:18:56.932 9.197 - 9.244: 98.6929% ( 1) 00:18:56.932 11.330 - 11.378: 98.7007% ( 1) 00:18:56.932 15.644 - 15.739: 98.7163% ( 2) 00:18:56.932 15.739 - 15.834: 98.7555% ( 5) 00:18:56.932 15.834 - 15.929: 98.7868% ( 4) 00:18:56.932 15.929 - 16.024: 98.8024% ( 2) 00:18:56.932 16.024 - 16.119: 98.8338% ( 4) 00:18:56.932 16.119 - 16.213: 98.8572% ( 3) 00:18:56.932 16.213 - 16.308: 98.9120% ( 7) 00:18:56.932 16.308 - 16.403: 98.9433% ( 4) 00:18:56.932 16.403 - 16.498: 98.9668% ( 3) 00:18:56.932 16.498 - 16.593: 99.0138% ( 6) 00:18:56.932 16.593 - 16.687: 99.0686% ( 7) 00:18:56.932 16.687 - 16.782: 99.0999% ( 4) 00:18:56.932 16.782 - 16.877: 99.1312% ( 4) 00:18:56.932 16.877 - 16.972: 99.1468% ( 2) 00:18:56.932 16.972 - 17.067: 99.1860% ( 5) 00:18:56.932 17.067 - 17.161: 99.2251% ( 5) 00:18:56.932 17.161 - 17.256: 99.2329% ( 1) 00:18:56.932 17.256 - 17.351: 99.2486% ( 2) 00:18:56.932 17.351 - 17.446: 99.2642% ( 2) 00:18:56.932 17.446 - 17.541: 99.2721% ( 1) 00:18:56.932 17.636 - 17.730: 99.2877% ( 2) 00:18:56.932 17.730 - 17.825: 99.3034% ( 2) 00:18:56.932 18.110 - 
18.204: 99.3190% ( 2) 00:18:56.932 18.299 - 18.394: 99.3347% ( 2) 00:18:56.932 18.394 - 18.489: 99.3425% ( 1) 00:18:56.932 18.679 - 18.773: 99.3503% ( 1) 00:18:56.932 18.773 - 18.868: 99.3582% ( 1) 00:18:56.932 19.153 - 19.247: 99.3660% ( 1) 00:18:56.932 19.247 - 19.342: 99.3738% ( 1) 00:18:56.932 19.342 - 19.437: 99.3817% ( 1) 00:18:56.932 20.954 - 21.049: 99.3895% ( 1) 00:18:56.932 3980.705 - 4004.978: 99.7730% ( 49) 00:18:56.932 4004.978 - 4029.250: 99.9843% ( 27) 00:18:56.932 7961.410 - 8009.956: 100.0000% ( 2) 00:18:56.932 00:18:56.932 07:52:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:56.932 07:52:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:56.932 07:52:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:56.932 07:52:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:56.932 07:52:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:57.191 [ 00:18:57.191 { 00:18:57.191 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:57.191 "subtype": "Discovery", 00:18:57.191 "listen_addresses": [], 00:18:57.191 "allow_any_host": true, 00:18:57.191 "hosts": [] 00:18:57.191 }, 00:18:57.191 { 00:18:57.191 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:57.191 "subtype": "NVMe", 00:18:57.191 "listen_addresses": [ 00:18:57.191 { 00:18:57.191 "trtype": "VFIOUSER", 00:18:57.191 "adrfam": "IPv4", 00:18:57.191 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:57.191 "trsvcid": "0" 00:18:57.191 } 00:18:57.191 ], 00:18:57.191 "allow_any_host": true, 00:18:57.191 "hosts": [], 00:18:57.191 "serial_number": "SPDK1", 
00:18:57.191 "model_number": "SPDK bdev Controller", 00:18:57.191 "max_namespaces": 32, 00:18:57.191 "min_cntlid": 1, 00:18:57.191 "max_cntlid": 65519, 00:18:57.191 "namespaces": [ 00:18:57.191 { 00:18:57.191 "nsid": 1, 00:18:57.191 "bdev_name": "Malloc1", 00:18:57.191 "name": "Malloc1", 00:18:57.191 "nguid": "30DCAE0437DA406FACFFFE4097CAD7C8", 00:18:57.191 "uuid": "30dcae04-37da-406f-acff-fe4097cad7c8" 00:18:57.191 }, 00:18:57.191 { 00:18:57.191 "nsid": 2, 00:18:57.191 "bdev_name": "Malloc3", 00:18:57.191 "name": "Malloc3", 00:18:57.191 "nguid": "C4D7FEEF613347CDB3CA9F99E6D3E936", 00:18:57.191 "uuid": "c4d7feef-6133-47cd-b3ca-9f99e6d3e936" 00:18:57.191 } 00:18:57.191 ] 00:18:57.191 }, 00:18:57.191 { 00:18:57.191 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:57.191 "subtype": "NVMe", 00:18:57.191 "listen_addresses": [ 00:18:57.191 { 00:18:57.191 "trtype": "VFIOUSER", 00:18:57.191 "adrfam": "IPv4", 00:18:57.191 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:57.191 "trsvcid": "0" 00:18:57.191 } 00:18:57.191 ], 00:18:57.191 "allow_any_host": true, 00:18:57.191 "hosts": [], 00:18:57.191 "serial_number": "SPDK2", 00:18:57.191 "model_number": "SPDK bdev Controller", 00:18:57.191 "max_namespaces": 32, 00:18:57.191 "min_cntlid": 1, 00:18:57.191 "max_cntlid": 65519, 00:18:57.191 "namespaces": [ 00:18:57.191 { 00:18:57.191 "nsid": 1, 00:18:57.191 "bdev_name": "Malloc2", 00:18:57.191 "name": "Malloc2", 00:18:57.191 "nguid": "B2808F0109D243B18F6DF45C51990C8D", 00:18:57.191 "uuid": "b2808f01-09d2-43b1-8f6d-f45c51990c8d" 00:18:57.191 } 00:18:57.191 ] 00:18:57.191 } 00:18:57.191 ] 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=726868 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:57.191 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:57.449 [2024-11-18 07:52:50.281835] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:57.449 Malloc4 00:18:57.449 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:57.708 [2024-11-18 07:52:50.708007] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:57.708 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:57.708 Asynchronous Event Request test 00:18:57.708 Attaching to /var/run/vfio-user/domain/vfio-user2/2 
00:18:57.708 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:57.708 Registering asynchronous event callbacks... 00:18:57.708 Starting namespace attribute notice tests for all controllers... 00:18:57.708 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:57.708 aer_cb - Changed Namespace 00:18:57.708 Cleaning up... 00:18:57.966 [ 00:18:57.966 { 00:18:57.966 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:57.966 "subtype": "Discovery", 00:18:57.966 "listen_addresses": [], 00:18:57.966 "allow_any_host": true, 00:18:57.966 "hosts": [] 00:18:57.966 }, 00:18:57.966 { 00:18:57.966 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:57.966 "subtype": "NVMe", 00:18:57.966 "listen_addresses": [ 00:18:57.966 { 00:18:57.966 "trtype": "VFIOUSER", 00:18:57.966 "adrfam": "IPv4", 00:18:57.966 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:57.966 "trsvcid": "0" 00:18:57.966 } 00:18:57.966 ], 00:18:57.966 "allow_any_host": true, 00:18:57.966 "hosts": [], 00:18:57.966 "serial_number": "SPDK1", 00:18:57.966 "model_number": "SPDK bdev Controller", 00:18:57.966 "max_namespaces": 32, 00:18:57.966 "min_cntlid": 1, 00:18:57.966 "max_cntlid": 65519, 00:18:57.966 "namespaces": [ 00:18:57.966 { 00:18:57.966 "nsid": 1, 00:18:57.966 "bdev_name": "Malloc1", 00:18:57.966 "name": "Malloc1", 00:18:57.966 "nguid": "30DCAE0437DA406FACFFFE4097CAD7C8", 00:18:57.966 "uuid": "30dcae04-37da-406f-acff-fe4097cad7c8" 00:18:57.966 }, 00:18:57.966 { 00:18:57.966 "nsid": 2, 00:18:57.966 "bdev_name": "Malloc3", 00:18:57.966 "name": "Malloc3", 00:18:57.966 "nguid": "C4D7FEEF613347CDB3CA9F99E6D3E936", 00:18:57.966 "uuid": "c4d7feef-6133-47cd-b3ca-9f99e6d3e936" 00:18:57.966 } 00:18:57.966 ] 00:18:57.966 }, 00:18:57.966 { 00:18:57.966 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:57.966 "subtype": "NVMe", 00:18:57.966 "listen_addresses": [ 00:18:57.966 { 00:18:57.966 "trtype": "VFIOUSER", 00:18:57.966 "adrfam": "IPv4", 00:18:57.966 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:18:57.966 "trsvcid": "0" 00:18:57.966 } 00:18:57.966 ], 00:18:57.966 "allow_any_host": true, 00:18:57.966 "hosts": [], 00:18:57.966 "serial_number": "SPDK2", 00:18:57.966 "model_number": "SPDK bdev Controller", 00:18:57.966 "max_namespaces": 32, 00:18:57.966 "min_cntlid": 1, 00:18:57.966 "max_cntlid": 65519, 00:18:57.966 "namespaces": [ 00:18:57.966 { 00:18:57.966 "nsid": 1, 00:18:57.966 "bdev_name": "Malloc2", 00:18:57.966 "name": "Malloc2", 00:18:57.966 "nguid": "B2808F0109D243B18F6DF45C51990C8D", 00:18:57.966 "uuid": "b2808f01-09d2-43b1-8f6d-f45c51990c8d" 00:18:57.967 }, 00:18:57.967 { 00:18:57.967 "nsid": 2, 00:18:57.967 "bdev_name": "Malloc4", 00:18:57.967 "name": "Malloc4", 00:18:57.967 "nguid": "D1E139CBF6CB4067AE32AAD2917AD4B2", 00:18:57.967 "uuid": "d1e139cb-f6cb-4067-ae32-aad2917ad4b2" 00:18:57.967 } 00:18:57.967 ] 00:18:57.967 } 00:18:57.967 ] 00:18:57.967 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 726868 00:18:57.967 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:57.967 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 721264 00:18:57.967 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 721264 ']' 00:18:57.967 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 721264 00:18:57.967 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:57.967 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.967 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 721264 00:18:57.967 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:18:57.967 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.967 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 721264' 00:18:57.967 killing process with pid 721264 00:18:57.967 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 721264 00:18:57.967 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 721264 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=727006 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 727006' 00:18:58.537 Process pid: 727006 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 727006 00:18:58.537 07:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 727006 ']' 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.537 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:58.537 [2024-11-18 07:52:51.402774] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:58.537 [2024-11-18 07:52:51.403794] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:58.537 [2024-11-18 07:52:51.403863] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.537 [2024-11-18 07:52:51.470285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.537 [2024-11-18 07:52:51.512259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.537 [2024-11-18 07:52:51.512319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:58.537 [2024-11-18 07:52:51.512347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.537 [2024-11-18 07:52:51.512358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.537 [2024-11-18 07:52:51.512367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.537 [2024-11-18 07:52:51.513816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.537 [2024-11-18 07:52:51.513889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.537 [2024-11-18 07:52:51.513953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.537 [2024-11-18 07:52:51.513955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.537 [2024-11-18 07:52:51.598424] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:58.537 [2024-11-18 07:52:51.598728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:58.537 [2024-11-18 07:52:51.598941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:58.537 [2024-11-18 07:52:51.599549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:58.537 [2024-11-18 07:52:51.599809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:58.796 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.796 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:58.796 07:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:59.735 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:59.993 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:59.993 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:59.993 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:59.993 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:59.993 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:00.252 Malloc1 00:19:00.252 07:52:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:00.510 07:52:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:00.767 07:52:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:01.025 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:01.025 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:01.025 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:01.592 Malloc2 00:19:01.592 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:01.592 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:01.850 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 727006 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 727006 ']' 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 727006 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.417 07:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 727006 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 727006' 00:19:02.417 killing process with pid 727006 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 727006 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 727006 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:02.417 00:19:02.417 real 0m53.451s 00:19:02.417 user 3m26.948s 00:19:02.417 sys 0m4.051s 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.417 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:02.417 ************************************ 00:19:02.417 END TEST nvmf_vfio_user 00:19:02.417 ************************************ 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.676 ************************************ 00:19:02.676 START TEST nvmf_vfio_user_nvme_compliance 00:19:02.676 ************************************ 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:02.676 * Looking for test storage... 00:19:02.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.676 07:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.676 07:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.676 --rc genhtml_branch_coverage=1 00:19:02.676 --rc genhtml_function_coverage=1 00:19:02.676 --rc genhtml_legend=1 00:19:02.676 --rc geninfo_all_blocks=1 00:19:02.676 --rc geninfo_unexecuted_blocks=1 00:19:02.676 00:19:02.676 ' 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.676 --rc genhtml_branch_coverage=1 00:19:02.676 --rc genhtml_function_coverage=1 00:19:02.676 --rc genhtml_legend=1 00:19:02.676 --rc geninfo_all_blocks=1 00:19:02.676 --rc geninfo_unexecuted_blocks=1 00:19:02.676 00:19:02.676 ' 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.676 --rc genhtml_branch_coverage=1 00:19:02.676 --rc genhtml_function_coverage=1 00:19:02.676 --rc 
genhtml_legend=1 00:19:02.676 --rc geninfo_all_blocks=1 00:19:02.676 --rc geninfo_unexecuted_blocks=1 00:19:02.676 00:19:02.676 ' 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.676 --rc genhtml_branch_coverage=1 00:19:02.676 --rc genhtml_function_coverage=1 00:19:02.676 --rc genhtml_legend=1 00:19:02.676 --rc geninfo_all_blocks=1 00:19:02.676 --rc geninfo_unexecuted_blocks=1 00:19:02.676 00:19:02.676 ' 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.676 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.677 07:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.677 07:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=727618 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 727618' 00:19:02.677 Process pid: 727618 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 727618 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 727618 ']' 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.677 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:02.937 [2024-11-18 07:52:55.776080] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:19:02.937 [2024-11-18 07:52:55.776162] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.937 [2024-11-18 07:52:55.843056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.937 [2024-11-18 07:52:55.886560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.937 [2024-11-18 07:52:55.886619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.937 [2024-11-18 07:52:55.886649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.937 [2024-11-18 07:52:55.886661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.937 [2024-11-18 07:52:55.886670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
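The per-device setup loop exercised earlier in this log (create one VFIOUSER transport, then for each device: a socket directory, a 64 MiB/512 B malloc bdev, a subsystem, a namespace, and a vfio-user listener) can be summarized as a standalone sketch. `RPC` defaults to a dry-run `echo` since the real `scripts/rpc.py` needs a live target; the `/tmp` prefix is an assumption replacing the log's `/var/run/vfio-user`, which needs root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the vfio-user target setup from nvmf_vfio_user.sh.
# Point RPC at scripts/rpc.py against a running nvmf_tgt to execute for real.
RPC="${RPC:-echo rpc.py}"
NUM_DEVICES=2

# One VFIOUSER transport serves every device
$RPC nvmf_create_transport -t VFIOUSER

for i in $(seq 1 "$NUM_DEVICES"); do
    dir="/tmp/vfio-user/domain/vfio-user$i/$i"
    mkdir -p "$dir"                       # socket directory for this device
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB, 512 B blocks
    $RPC nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$dir" -s 0
done
```

The compliance test that starts below performs an equivalent setup via `rpc_cmd` for a single `malloc0`/`cnode0` device before pointing `nvme_compliance` at the resulting `/var/run/vfio-user` socket.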
00:19:02.937 [2024-11-18 07:52:55.887952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.937 [2024-11-18 07:52:55.888020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.937 [2024-11-18 07:52:55.888023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.937 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.937 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:02.937 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.314 07:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.314 malloc0 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:04.314 07:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:04.314 00:19:04.314 00:19:04.314 CUnit - A unit testing framework for C - Version 2.1-3 00:19:04.314 http://cunit.sourceforge.net/ 00:19:04.314 00:19:04.314 00:19:04.314 Suite: nvme_compliance 00:19:04.314 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-18 07:52:57.256033] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.314 [2024-11-18 07:52:57.257544] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:04.314 [2024-11-18 07:52:57.257569] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:04.314 [2024-11-18 07:52:57.257581] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:04.314 [2024-11-18 07:52:57.259050] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.314 passed 00:19:04.314 Test: admin_identify_ctrlr_verify_fused ...[2024-11-18 07:52:57.344651] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.314 [2024-11-18 07:52:57.347674] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.314 passed 00:19:04.572 Test: admin_identify_ns ...[2024-11-18 07:52:57.433059] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.572 [2024-11-18 07:52:57.493512] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:04.572 [2024-11-18 07:52:57.501521] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:04.572 [2024-11-18 07:52:57.522634] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:04.572 passed 00:19:04.572 Test: admin_get_features_mandatory_features ...[2024-11-18 07:52:57.608759] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.572 [2024-11-18 07:52:57.611784] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.572 passed 00:19:04.831 Test: admin_get_features_optional_features ...[2024-11-18 07:52:57.694323] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.831 [2024-11-18 07:52:57.697339] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.831 passed 00:19:04.831 Test: admin_set_features_number_of_queues ...[2024-11-18 07:52:57.781556] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.831 [2024-11-18 07:52:57.887591] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.089 passed 00:19:05.089 Test: admin_get_log_page_mandatory_logs ...[2024-11-18 07:52:57.971377] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.089 [2024-11-18 07:52:57.974401] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.089 passed 00:19:05.089 Test: admin_get_log_page_with_lpo ...[2024-11-18 07:52:58.056661] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.089 [2024-11-18 07:52:58.124524] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:05.089 [2024-11-18 07:52:58.137571] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.089 passed 00:19:05.347 Test: fabric_property_get ...[2024-11-18 07:52:58.221315] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.347 [2024-11-18 07:52:58.222618] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:05.347 [2024-11-18 07:52:58.224337] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.347 passed 00:19:05.347 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-18 07:52:58.308924] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.347 [2024-11-18 07:52:58.310229] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:05.347 [2024-11-18 07:52:58.311941] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.347 passed 00:19:05.347 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-18 07:52:58.395088] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.606 [2024-11-18 07:52:58.479518] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:05.606 [2024-11-18 07:52:58.495502] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:05.607 [2024-11-18 07:52:58.500612] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.607 passed 00:19:05.607 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-18 07:52:58.581606] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.607 [2024-11-18 07:52:58.582942] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:05.607 [2024-11-18 07:52:58.584642] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.607 passed 00:19:05.607 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-18 07:52:58.667828] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.867 [2024-11-18 07:52:58.747499] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:05.867 [2024-11-18 
07:52:58.771499] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:05.867 [2024-11-18 07:52:58.776615] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.867 passed 00:19:05.867 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-18 07:52:58.857238] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.867 [2024-11-18 07:52:58.858548] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:05.867 [2024-11-18 07:52:58.858603] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:05.867 [2024-11-18 07:52:58.862272] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.867 passed 00:19:05.867 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-18 07:52:58.944069] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.127 [2024-11-18 07:52:59.036518] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:06.127 [2024-11-18 07:52:59.044514] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:06.127 [2024-11-18 07:52:59.052502] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:06.127 [2024-11-18 07:52:59.060504] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:06.127 [2024-11-18 07:52:59.089602] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.127 passed 00:19:06.127 Test: admin_create_io_sq_verify_pc ...[2024-11-18 07:52:59.173254] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.127 [2024-11-18 07:52:59.189530] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:06.127 [2024-11-18 07:52:59.206681] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.386 passed 00:19:06.386 Test: admin_create_io_qp_max_qps ...[2024-11-18 07:52:59.289238] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.322 [2024-11-18 07:53:00.388511] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:07.892 [2024-11-18 07:53:00.791883] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.892 passed 00:19:07.892 Test: admin_create_io_sq_shared_cq ...[2024-11-18 07:53:00.874151] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:08.153 [2024-11-18 07:53:01.006500] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:08.153 [2024-11-18 07:53:01.043594] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:08.153 passed 00:19:08.153 00:19:08.153 Run Summary: Type Total Ran Passed Failed Inactive 00:19:08.153 suites 1 1 n/a 0 0 00:19:08.153 tests 18 18 18 0 0 00:19:08.153 asserts 360 360 360 0 n/a 00:19:08.153 00:19:08.153 Elapsed time = 1.572 seconds 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 727618 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 727618 ']' 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 727618 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 727618 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 727618' 00:19:08.153 killing process with pid 727618 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 727618 00:19:08.153 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 727618 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:08.412 00:19:08.412 real 0m5.811s 00:19:08.412 user 0m16.375s 00:19:08.412 sys 0m0.563s 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:08.412 ************************************ 00:19:08.412 END TEST nvmf_vfio_user_nvme_compliance 00:19:08.412 ************************************ 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:08.412 ************************************ 00:19:08.412 START TEST nvmf_vfio_user_fuzz 00:19:08.412 ************************************ 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:08.412 * Looking for test storage... 00:19:08.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:19:08.412 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.672 07:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:08.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.672 --rc genhtml_branch_coverage=1 00:19:08.672 --rc genhtml_function_coverage=1 00:19:08.672 --rc genhtml_legend=1 00:19:08.672 --rc geninfo_all_blocks=1 00:19:08.672 --rc geninfo_unexecuted_blocks=1 00:19:08.672 00:19:08.672 ' 00:19:08.672 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:08.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.672 --rc genhtml_branch_coverage=1 00:19:08.672 --rc genhtml_function_coverage=1 00:19:08.672 --rc genhtml_legend=1 00:19:08.672 --rc geninfo_all_blocks=1 00:19:08.672 --rc geninfo_unexecuted_blocks=1 00:19:08.672 00:19:08.672 ' 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:08.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.673 --rc genhtml_branch_coverage=1 00:19:08.673 --rc genhtml_function_coverage=1 00:19:08.673 --rc genhtml_legend=1 00:19:08.673 --rc geninfo_all_blocks=1 00:19:08.673 --rc geninfo_unexecuted_blocks=1 00:19:08.673 00:19:08.673 ' 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:08.673 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:08.673 --rc genhtml_branch_coverage=1 00:19:08.673 --rc genhtml_function_coverage=1 00:19:08.673 --rc genhtml_legend=1 00:19:08.673 --rc geninfo_all_blocks=1 00:19:08.673 --rc geninfo_unexecuted_blocks=1 00:19:08.673 00:19:08.673 ' 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.673 07:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:08.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=728342 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 728342' 00:19:08.673 Process pid: 728342 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 728342 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 728342 ']' 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.673 07:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.673 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:08.932 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.932 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:08.932 07:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.870 malloc0 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.870 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:09.871 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.871 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.871 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.871 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:09.871 07:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:41.946 Fuzzing completed. Shutting down the fuzz application 00:19:41.946 00:19:41.946 Dumping successful admin opcodes: 00:19:41.946 8, 9, 10, 24, 00:19:41.946 Dumping successful io opcodes: 00:19:41.946 0, 00:19:41.946 NS: 0x20000081ef00 I/O qp, Total commands completed: 647406, total successful commands: 2512, random_seed: 3869503936 00:19:41.946 NS: 0x20000081ef00 admin qp, Total commands completed: 82432, total successful commands: 658, random_seed: 103690816 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 728342 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 728342 ']' 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 728342 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 728342 00:19:41.946 07:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 728342' 00:19:41.946 killing process with pid 728342 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 728342 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 728342 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:41.946 00:19:41.946 real 0m32.139s 00:19:41.946 user 0m30.254s 00:19:41.946 sys 0m29.297s 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:41.946 ************************************ 00:19:41.946 END TEST nvmf_vfio_user_fuzz 00:19:41.946 ************************************ 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.946 ************************************ 00:19:41.946 START TEST nvmf_auth_target 00:19:41.946 ************************************ 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:41.946 * Looking for test storage... 00:19:41.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.946 07:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.946 07:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.946 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.946 --rc genhtml_branch_coverage=1 00:19:41.947 --rc genhtml_function_coverage=1 00:19:41.947 --rc genhtml_legend=1 00:19:41.947 --rc geninfo_all_blocks=1 00:19:41.947 --rc geninfo_unexecuted_blocks=1 00:19:41.947 00:19:41.947 ' 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:41.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.947 --rc genhtml_branch_coverage=1 00:19:41.947 --rc genhtml_function_coverage=1 00:19:41.947 --rc genhtml_legend=1 00:19:41.947 --rc geninfo_all_blocks=1 00:19:41.947 --rc geninfo_unexecuted_blocks=1 00:19:41.947 00:19:41.947 ' 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:41.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.947 --rc genhtml_branch_coverage=1 00:19:41.947 --rc genhtml_function_coverage=1 00:19:41.947 --rc genhtml_legend=1 00:19:41.947 --rc geninfo_all_blocks=1 00:19:41.947 --rc geninfo_unexecuted_blocks=1 00:19:41.947 00:19:41.947 ' 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:41.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.947 --rc genhtml_branch_coverage=1 00:19:41.947 --rc genhtml_function_coverage=1 00:19:41.947 --rc genhtml_legend=1 00:19:41.947 
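The trace above steps through the `lt 1.15 2` check via `cmp_versions` in scripts/common.sh: both version strings are split on `.`, `-` and `:` into arrays (`ver1`, `ver2`) and compared numerically field by field. A minimal sketch of that logic, reconstructed from the trace rather than copied from the SPDK helper:

```shell
# Reconstruction of the lt/cmp_versions logic the trace exercises.
# The real helper lives in scripts/common.sh; this is a sketch.
lt() { # exit 0 if version $1 sorts strictly before version $2
    local IFS=.-: v ver1 ver2
    read -ra ver1 <<< "$1"   # split fields on '.', '-' and ':'
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # all fields equal: not strictly less
}

lt 1.15 2 && echo "1.15 < 2"
```

With `lcov --version` reporting 1.15, `lt 1.15 2` succeeds, which is why the run falls through to setting the `--rc lcov_branch_coverage=1 ...` options seen next in the trace.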
--rc geninfo_all_blocks=1 00:19:41.947 --rc geninfo_unexecuted_blocks=1 00:19:41.947 00:19:41.947 ' 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.947 
07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:41.947 07:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:41.947 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:41.948 07:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:41.948 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.944 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.944 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.944 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.944 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.944 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.944 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.944 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.944 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.945 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.945 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:42.945 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.945 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:42.945 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.945 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:42.945 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.945 07:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.945 07:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:42.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:42.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.945 
07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:42.945 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.945 
07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:42.945 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:42.945 07:53:36 
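The loop traced above (nvmf/common.sh lines 410-429) maps each discovered NIC PCI address to the kernel net devices that sysfs exposes under it, which is how the run arrives at `cvl_0_0` and `cvl_0_1`. A hedged sketch of that discovery step; the function name and the sysfs-root parameter are illustrative liberties (taken so the sketch can run against a mock tree), not SPDK's API:

```shell
# Reconstruction of the per-PCI net-device discovery seen in the trace.
# $1 is the sysfs devices root (normally /sys/bus/pci/devices); taking it
# as a parameter is an assumption made here for testability.
discover_net_devs() {
    local base=$1 pci
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # expand the net/ children of this PCI function...
        local pci_net_devs=("$base/$pci/net/"*)
        # ...then keep only the interface names, as the trace's
        # ${pci_net_devs[@]##*/} expansion does
        pci_net_devs=("${pci_net_devs[@]##*/}")
        net_devs+=("${pci_net_devs[@]}")
    done
}
```

Run against a mock tree with the two ice functions from the log (0000:0a:00.0 and 0000:0a:00.1), this yields the same two interfaces the log reports before the netns setup begins.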
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.945 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:43.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:19:43.204 00:19:43.204 --- 10.0.0.2 ping statistics --- 00:19:43.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.204 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:43.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:19:43.204 00:19:43.204 --- 10.0.0.1 ping statistics --- 00:19:43.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.204 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=733794 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 733794 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 733794 ']' 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.204 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=733820 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b14ce1885d17b743c5349f0d6b181219cc73fac689ca04d8 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FOL 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b14ce1885d17b743c5349f0d6b181219cc73fac689ca04d8 0 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b14ce1885d17b743c5349f0d6b181219cc73fac689ca04d8 0 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b14ce1885d17b743c5349f0d6b181219cc73fac689ca04d8 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FOL 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FOL 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.FOL 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
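The `gen_dhchap_key null 48` call traced above pulls len/2 random bytes as hex (`xxd -p -c0 -l 24 /dev/urandom`) and then `format_dhchap_key`/`format_key` wrap them in the NVMe DH-HMAC-CHAP secret format `DHHC-1:<digest>:<base64 payload>:`. A hedged reconstruction follows; `od` stands in for the trace's `xxd`, and the payload layout (key bytes followed by their little-endian CRC32) is my reading of the TP-8006 secret representation as used by nvme-cli, not code copied from nvmf/common.sh:

```shell
# Sketch of gen_dhchap_key null 48 + format_key DHHC-1 <hex> 0.
# ASSUMPTION: payload = key bytes || CRC32(key) little-endian, per my
# reading of NVMe TP-8006; the real formatting is done by the python
# snippet invoked at nvmf/common.sh@733 in the trace.
len=48                                   # hex characters requested
key=$(od -An -tx1 -N$((len / 2)) /dev/urandom | tr -d ' \n')
digest=0                                 # 0 = null, 1/2/3 = sha256/384/512

formatted=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
payload = key + struct.pack("<I", zlib.crc32(key))
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(payload).decode()}:")
EOF
)
echo "$formatted"                        # e.g. DHHC-1:00:<base64>:
```

The resulting secret is what the test writes to the `chmod 0600` key files (`/tmp/spdk.key-null.FOL` and friends) for the auth target to load.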
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:43.463 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e6ac0946b9d216bd1638bc2f5e09d97ba7a1af187aa8f25cfaba66c76e0aaaba 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jKN 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e6ac0946b9d216bd1638bc2f5e09d97ba7a1af187aa8f25cfaba66c76e0aaaba 3 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e6ac0946b9d216bd1638bc2f5e09d97ba7a1af187aa8f25cfaba66c76e0aaaba 3 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e6ac0946b9d216bd1638bc2f5e09d97ba7a1af187aa8f25cfaba66c76e0aaaba 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:43.464 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:43.722 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jKN 00:19:43.722 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jKN 00:19:43.722 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.jKN 00:19:43.722 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3e59ec2fcf7eb54481ff370ae643c30f 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Kvz 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3e59ec2fcf7eb54481ff370ae643c30f 1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
3e59ec2fcf7eb54481ff370ae643c30f 1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3e59ec2fcf7eb54481ff370ae643c30f 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Kvz 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Kvz 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Kvz 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=62e308616ca621642d1a605d543102fd0fdada8d8e7f0548 00:19:43.723 07:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3io 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 62e308616ca621642d1a605d543102fd0fdada8d8e7f0548 2 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 62e308616ca621642d1a605d543102fd0fdada8d8e7f0548 2 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=62e308616ca621642d1a605d543102fd0fdada8d8e7f0548 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3io 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3io 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.3io 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2b0837c24f4efa37ec5fd284354befaeff401af9aa98df01 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uc7 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2b0837c24f4efa37ec5fd284354befaeff401af9aa98df01 2 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2b0837c24f4efa37ec5fd284354befaeff401af9aa98df01 2 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2b0837c24f4efa37ec5fd284354befaeff401af9aa98df01 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uc7 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uc7 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.uc7 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e31943572371ee4ceaf2bd9797528bed 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Nnm 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e31943572371ee4ceaf2bd9797528bed 1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e31943572371ee4ceaf2bd9797528bed 1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e31943572371ee4ceaf2bd9797528bed 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Nnm 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Nnm 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Nnm 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a9aec16883aaeaebf5af7100e7e84e7f33160a516abda25f95c1e68f2cbba552 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ok6 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a9aec16883aaeaebf5af7100e7e84e7f33160a516abda25f95c1e68f2cbba552 3 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 a9aec16883aaeaebf5af7100e7e84e7f33160a516abda25f95c1e68f2cbba552 3 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a9aec16883aaeaebf5af7100e7e84e7f33160a516abda25f95c1e68f2cbba552 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:43.723 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ok6 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ok6 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ok6 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 733794 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 733794 ']' 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.982 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 733820 /var/tmp/host.sock 00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 733820 ']' 00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:44.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.240 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FOL 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FOL 00:19:44.498 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FOL 00:19:44.756 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.jKN ]] 00:19:44.756 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jKN 00:19:44.756 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.756 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.756 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.756 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jKN 00:19:44.756 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jKN 00:19:45.014 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:45.014 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Kvz 00:19:45.014 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.014 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.014 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.014 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Kvz 00:19:45.014 07:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Kvz 00:19:45.272 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.3io ]] 00:19:45.272 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3io 00:19:45.272 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.272 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.272 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.272 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3io 00:19:45.272 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3io 00:19:45.530 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:45.530 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uc7 00:19:45.530 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.530 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.530 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.530 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.uc7 00:19:45.530 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.uc7 00:19:45.788 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Nnm ]] 00:19:45.788 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nnm 00:19:45.788 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.788 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.788 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.788 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nnm 00:19:45.788 07:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nnm 00:19:46.047 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:46.047 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ok6 00:19:46.047 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.047 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.047 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.047 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ok6 00:19:46.047 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ok6 00:19:46.378 07:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:46.378 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:46.378 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.378 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.378 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.378 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.635 07:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.635 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.893 00:19:46.893 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.893 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.893 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.152 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.411 { 00:19:47.411 "cntlid": 1, 00:19:47.411 "qid": 0, 00:19:47.411 "state": "enabled", 00:19:47.411 "thread": "nvmf_tgt_poll_group_000", 00:19:47.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.411 "listen_address": { 00:19:47.411 "trtype": "TCP", 00:19:47.411 "adrfam": "IPv4", 00:19:47.411 "traddr": "10.0.0.2", 00:19:47.411 "trsvcid": "4420" 00:19:47.411 }, 00:19:47.411 "peer_address": { 00:19:47.411 "trtype": "TCP", 00:19:47.411 "adrfam": "IPv4", 00:19:47.411 "traddr": "10.0.0.1", 00:19:47.411 "trsvcid": "34518" 00:19:47.411 }, 00:19:47.411 "auth": { 00:19:47.411 "state": "completed", 00:19:47.411 "digest": "sha256", 00:19:47.411 "dhgroup": "null" 00:19:47.411 } 00:19:47.411 } 00:19:47.411 ]' 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.411 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.670 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:19:47.670 07:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:19:48.607 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.607 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.607 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.607 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.607 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.607 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.607 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:48.607 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.866 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.125 00:19:49.125 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.125 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.125 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.383 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.383 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.383 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.383 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.383 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.383 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.383 { 00:19:49.383 "cntlid": 3, 00:19:49.383 "qid": 0, 00:19:49.383 "state": "enabled", 00:19:49.383 "thread": "nvmf_tgt_poll_group_000", 00:19:49.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:49.383 "listen_address": { 00:19:49.383 "trtype": "TCP", 00:19:49.383 "adrfam": "IPv4", 00:19:49.383 
"traddr": "10.0.0.2", 00:19:49.383 "trsvcid": "4420" 00:19:49.383 }, 00:19:49.383 "peer_address": { 00:19:49.383 "trtype": "TCP", 00:19:49.383 "adrfam": "IPv4", 00:19:49.383 "traddr": "10.0.0.1", 00:19:49.383 "trsvcid": "34552" 00:19:49.383 }, 00:19:49.383 "auth": { 00:19:49.383 "state": "completed", 00:19:49.383 "digest": "sha256", 00:19:49.383 "dhgroup": "null" 00:19:49.383 } 00:19:49.383 } 00:19:49.383 ]' 00:19:49.383 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.642 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.642 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.642 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.642 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.642 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.642 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.642 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.900 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:19:49.900 07:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:19:50.838 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.838 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.838 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.838 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.838 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.838 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.838 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.838 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.096 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.097 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.097 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.097 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.355 00:19:51.355 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.355 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.355 
07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.613 { 00:19:51.613 "cntlid": 5, 00:19:51.613 "qid": 0, 00:19:51.613 "state": "enabled", 00:19:51.613 "thread": "nvmf_tgt_poll_group_000", 00:19:51.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.613 "listen_address": { 00:19:51.613 "trtype": "TCP", 00:19:51.613 "adrfam": "IPv4", 00:19:51.613 "traddr": "10.0.0.2", 00:19:51.613 "trsvcid": "4420" 00:19:51.613 }, 00:19:51.613 "peer_address": { 00:19:51.613 "trtype": "TCP", 00:19:51.613 "adrfam": "IPv4", 00:19:51.613 "traddr": "10.0.0.1", 00:19:51.613 "trsvcid": "34584" 00:19:51.613 }, 00:19:51.613 "auth": { 00:19:51.613 "state": "completed", 00:19:51.613 "digest": "sha256", 00:19:51.613 "dhgroup": "null" 00:19:51.613 } 00:19:51.613 } 00:19:51.613 ]' 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.613 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:51.871 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.871 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.871 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.871 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.871 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.129 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:19:52.129 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:19:53.063 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.064 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.064 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.064 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.064 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.064 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.064 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.064 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.322 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.580 00:19:53.580 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.580 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.580 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.838 
07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.838 { 00:19:53.838 "cntlid": 7, 00:19:53.838 "qid": 0, 00:19:53.838 "state": "enabled", 00:19:53.838 "thread": "nvmf_tgt_poll_group_000", 00:19:53.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.838 "listen_address": { 00:19:53.838 "trtype": "TCP", 00:19:53.838 "adrfam": "IPv4", 00:19:53.838 "traddr": "10.0.0.2", 00:19:53.838 "trsvcid": "4420" 00:19:53.838 }, 00:19:53.838 "peer_address": { 00:19:53.838 "trtype": "TCP", 00:19:53.838 "adrfam": "IPv4", 00:19:53.838 "traddr": "10.0.0.1", 00:19:53.838 "trsvcid": "34614" 00:19:53.838 }, 00:19:53.838 "auth": { 00:19:53.838 "state": "completed", 00:19:53.838 "digest": "sha256", 00:19:53.838 "dhgroup": "null" 00:19:53.838 } 00:19:53.838 } 00:19:53.838 ]' 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.838 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:53.839 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.097 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.097 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.097 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.355 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:19:54.355 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:19:55.293 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.293 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.293 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.293 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.293 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.293 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.293 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.293 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.294 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.552 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.552 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.552 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.552 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.810 00:19:55.810 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.810 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.810 07:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.069 { 00:19:56.069 "cntlid": 9, 00:19:56.069 "qid": 0, 00:19:56.069 "state": "enabled", 00:19:56.069 "thread": "nvmf_tgt_poll_group_000", 00:19:56.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.069 "listen_address": { 00:19:56.069 "trtype": "TCP", 00:19:56.069 "adrfam": "IPv4", 00:19:56.069 "traddr": "10.0.0.2", 00:19:56.069 "trsvcid": "4420" 00:19:56.069 }, 00:19:56.069 "peer_address": { 00:19:56.069 "trtype": "TCP", 00:19:56.069 "adrfam": "IPv4", 00:19:56.069 "traddr": "10.0.0.1", 00:19:56.069 "trsvcid": "34630" 00:19:56.069 
}, 00:19:56.069 "auth": { 00:19:56.069 "state": "completed", 00:19:56.069 "digest": "sha256", 00:19:56.069 "dhgroup": "ffdhe2048" 00:19:56.069 } 00:19:56.069 } 00:19:56.069 ]' 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.069 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.326 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:19:56.326 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:19:57.263 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.263 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.263 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.263 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.263 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.263 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.263 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.263 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.521 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.090 00:19:58.090 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.090 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.090 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.348 { 00:19:58.348 "cntlid": 11, 00:19:58.348 "qid": 0, 00:19:58.348 "state": "enabled", 00:19:58.348 "thread": "nvmf_tgt_poll_group_000", 00:19:58.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.348 "listen_address": { 00:19:58.348 "trtype": "TCP", 00:19:58.348 "adrfam": "IPv4", 00:19:58.348 "traddr": "10.0.0.2", 00:19:58.348 "trsvcid": "4420" 00:19:58.348 }, 00:19:58.348 "peer_address": { 00:19:58.348 "trtype": "TCP", 00:19:58.348 "adrfam": "IPv4", 00:19:58.348 "traddr": "10.0.0.1", 00:19:58.348 "trsvcid": "41058" 00:19:58.348 }, 00:19:58.348 "auth": { 00:19:58.348 "state": "completed", 00:19:58.348 "digest": "sha256", 00:19:58.348 "dhgroup": "ffdhe2048" 00:19:58.348 } 00:19:58.348 } 00:19:58.348 ]' 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.348 07:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.348 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.607 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:19:58.607 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:19:59.544 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.544 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.544 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:59.544 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.544 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.544 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.544 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.544 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.802 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.060 00:20:00.061 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.061 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.061 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.627 07:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.627 { 00:20:00.627 "cntlid": 13, 00:20:00.627 "qid": 0, 00:20:00.627 "state": "enabled", 00:20:00.627 "thread": "nvmf_tgt_poll_group_000", 00:20:00.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.627 "listen_address": { 00:20:00.627 "trtype": "TCP", 00:20:00.627 "adrfam": "IPv4", 00:20:00.627 "traddr": "10.0.0.2", 00:20:00.627 "trsvcid": "4420" 00:20:00.627 }, 00:20:00.627 "peer_address": { 00:20:00.627 "trtype": "TCP", 00:20:00.627 "adrfam": "IPv4", 00:20:00.627 "traddr": "10.0.0.1", 00:20:00.627 "trsvcid": "41094" 00:20:00.627 }, 00:20:00.627 "auth": { 00:20:00.627 "state": "completed", 00:20:00.627 "digest": "sha256", 00:20:00.627 "dhgroup": "ffdhe2048" 00:20:00.627 } 00:20:00.627 } 00:20:00.627 ]' 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.627 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.885 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:00.885 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:01.822 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.822 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.822 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.822 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.822 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.822 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.822 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:01.822 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.081 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:02.081 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.081 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.081 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.081 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.081 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.082 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.082 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.082 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.082 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.082 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.082 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.082 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.341 00:20:02.341 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.341 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.341 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.599 { 00:20:02.599 "cntlid": 15, 00:20:02.599 "qid": 0, 00:20:02.599 "state": "enabled", 00:20:02.599 "thread": "nvmf_tgt_poll_group_000", 00:20:02.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.599 "listen_address": { 00:20:02.599 "trtype": "TCP", 00:20:02.599 "adrfam": "IPv4", 00:20:02.599 "traddr": "10.0.0.2", 00:20:02.599 "trsvcid": "4420" 00:20:02.599 }, 00:20:02.599 "peer_address": { 00:20:02.599 "trtype": "TCP", 00:20:02.599 "adrfam": "IPv4", 00:20:02.599 "traddr": "10.0.0.1", 
00:20:02.599 "trsvcid": "41112" 00:20:02.599 }, 00:20:02.599 "auth": { 00:20:02.599 "state": "completed", 00:20:02.599 "digest": "sha256", 00:20:02.599 "dhgroup": "ffdhe2048" 00:20:02.599 } 00:20:02.599 } 00:20:02.599 ]' 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.599 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.857 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:02.857 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.858 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.858 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.858 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.115 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:03.115 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.052 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.312 07:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.312 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.608 00:20:04.608 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.608 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.608 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.892 { 00:20:04.892 "cntlid": 17, 00:20:04.892 "qid": 0, 00:20:04.892 "state": "enabled", 00:20:04.892 "thread": "nvmf_tgt_poll_group_000", 00:20:04.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.892 "listen_address": { 00:20:04.892 "trtype": "TCP", 00:20:04.892 "adrfam": "IPv4", 00:20:04.892 "traddr": "10.0.0.2", 00:20:04.892 "trsvcid": "4420" 00:20:04.892 }, 00:20:04.892 "peer_address": { 00:20:04.892 "trtype": "TCP", 00:20:04.892 "adrfam": "IPv4", 00:20:04.892 "traddr": "10.0.0.1", 00:20:04.892 "trsvcid": "41146" 00:20:04.892 }, 00:20:04.892 "auth": { 00:20:04.892 "state": "completed", 00:20:04.892 "digest": "sha256", 00:20:04.892 "dhgroup": "ffdhe3072" 00:20:04.892 } 00:20:04.892 } 00:20:04.892 ]' 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.892 07:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.892 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.150 07:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.150 07:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.150 07:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.409 07:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:05.409 07:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:06.345 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.345 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.345 07:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.345 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.345 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.345 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.345 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.345 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.604 07:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.604 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.863 00:20:06.863 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.863 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.863 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.122 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.122 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.123 { 00:20:07.123 "cntlid": 19, 00:20:07.123 "qid": 0, 00:20:07.123 "state": "enabled", 00:20:07.123 "thread": "nvmf_tgt_poll_group_000", 00:20:07.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.123 "listen_address": { 00:20:07.123 "trtype": "TCP", 00:20:07.123 "adrfam": "IPv4", 00:20:07.123 "traddr": "10.0.0.2", 00:20:07.123 "trsvcid": "4420" 00:20:07.123 }, 00:20:07.123 "peer_address": { 00:20:07.123 "trtype": "TCP", 00:20:07.123 "adrfam": "IPv4", 00:20:07.123 "traddr": "10.0.0.1", 00:20:07.123 "trsvcid": "41172" 00:20:07.123 }, 00:20:07.123 "auth": { 00:20:07.123 "state": "completed", 00:20:07.123 "digest": "sha256", 00:20:07.123 "dhgroup": "ffdhe3072" 00:20:07.123 } 00:20:07.123 } 00:20:07.123 ]' 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.123 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.389 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.389 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.389 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.648 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:07.648 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:08.582 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.582 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.582 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.582 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.582 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.582 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.582 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.582 07:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.840 07:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.099 00:20:09.099 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.099 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.099 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.358 { 00:20:09.358 "cntlid": 21, 00:20:09.358 "qid": 0, 00:20:09.358 "state": "enabled", 00:20:09.358 "thread": "nvmf_tgt_poll_group_000", 00:20:09.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:09.358 "listen_address": { 00:20:09.358 "trtype": "TCP", 00:20:09.358 "adrfam": "IPv4", 00:20:09.358 "traddr": "10.0.0.2", 00:20:09.358 
"trsvcid": "4420" 00:20:09.358 }, 00:20:09.358 "peer_address": { 00:20:09.358 "trtype": "TCP", 00:20:09.358 "adrfam": "IPv4", 00:20:09.358 "traddr": "10.0.0.1", 00:20:09.358 "trsvcid": "60402" 00:20:09.358 }, 00:20:09.358 "auth": { 00:20:09.358 "state": "completed", 00:20:09.358 "digest": "sha256", 00:20:09.358 "dhgroup": "ffdhe3072" 00:20:09.358 } 00:20:09.358 } 00:20:09.358 ]' 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.358 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.616 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.616 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.616 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.875 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:09.875 07:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:10.814 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.814 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.814 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.814 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.814 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.814 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.814 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:10.814 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.073 07:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.334 00:20:11.334 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.334 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.334 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.595 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.595 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.595 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.595 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.595 { 00:20:11.595 "cntlid": 23, 00:20:11.595 "qid": 0, 00:20:11.595 "state": "enabled", 00:20:11.595 "thread": "nvmf_tgt_poll_group_000", 00:20:11.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.595 "listen_address": { 00:20:11.595 "trtype": "TCP", 00:20:11.595 "adrfam": "IPv4", 00:20:11.595 "traddr": "10.0.0.2", 00:20:11.595 "trsvcid": "4420" 00:20:11.595 }, 00:20:11.595 "peer_address": { 00:20:11.595 "trtype": "TCP", 00:20:11.595 "adrfam": "IPv4", 00:20:11.595 "traddr": "10.0.0.1", 00:20:11.595 "trsvcid": "60438" 00:20:11.595 }, 00:20:11.595 "auth": { 00:20:11.595 "state": "completed", 00:20:11.595 "digest": "sha256", 00:20:11.595 "dhgroup": "ffdhe3072" 00:20:11.595 } 00:20:11.595 } 00:20:11.595 ]' 00:20:11.595 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.595 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.596 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.596 07:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.596 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.854 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.854 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.854 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.112 07:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:12.112 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.048 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.306 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.564 00:20:13.564 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.564 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.564 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.823 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.823 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.823 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.823 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.081 07:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.081 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.081 { 00:20:14.081 "cntlid": 25, 00:20:14.081 "qid": 0, 00:20:14.081 "state": "enabled", 00:20:14.081 "thread": "nvmf_tgt_poll_group_000", 00:20:14.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.081 "listen_address": { 00:20:14.081 "trtype": "TCP", 00:20:14.081 "adrfam": "IPv4", 00:20:14.081 "traddr": "10.0.0.2", 00:20:14.081 "trsvcid": "4420" 00:20:14.081 }, 00:20:14.081 "peer_address": { 00:20:14.081 "trtype": "TCP", 00:20:14.081 "adrfam": "IPv4", 00:20:14.081 "traddr": "10.0.0.1", 00:20:14.081 "trsvcid": "60462" 00:20:14.081 }, 00:20:14.081 "auth": { 00:20:14.081 "state": "completed", 00:20:14.081 "digest": "sha256", 00:20:14.081 "dhgroup": "ffdhe4096" 00:20:14.081 } 00:20:14.081 } 00:20:14.081 ]' 00:20:14.081 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.081 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.081 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.081 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.081 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.081 07:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.081 07:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.081 07:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.339 07:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:14.339 07:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:15.273 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.273 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.273 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.273 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.273 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.273 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.273 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.273 07:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.531 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.097 00:20:16.097 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.097 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.097 07:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.355 { 00:20:16.355 "cntlid": 27, 00:20:16.355 "qid": 0, 00:20:16.355 "state": "enabled", 00:20:16.355 "thread": "nvmf_tgt_poll_group_000", 00:20:16.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.355 "listen_address": { 00:20:16.355 "trtype": "TCP", 00:20:16.355 "adrfam": "IPv4", 00:20:16.355 "traddr": "10.0.0.2", 00:20:16.355 
"trsvcid": "4420" 00:20:16.355 }, 00:20:16.355 "peer_address": { 00:20:16.355 "trtype": "TCP", 00:20:16.355 "adrfam": "IPv4", 00:20:16.355 "traddr": "10.0.0.1", 00:20:16.355 "trsvcid": "60488" 00:20:16.355 }, 00:20:16.355 "auth": { 00:20:16.355 "state": "completed", 00:20:16.355 "digest": "sha256", 00:20:16.355 "dhgroup": "ffdhe4096" 00:20:16.355 } 00:20:16.355 } 00:20:16.355 ]' 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.355 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.614 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:16.614 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:17.553 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.553 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.553 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.553 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.553 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.553 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.553 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.553 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.812 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.382 00:20:18.382 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.382 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:18.382 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.641 { 00:20:18.641 "cntlid": 29, 00:20:18.641 "qid": 0, 00:20:18.641 "state": "enabled", 00:20:18.641 "thread": "nvmf_tgt_poll_group_000", 00:20:18.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.641 "listen_address": { 00:20:18.641 "trtype": "TCP", 00:20:18.641 "adrfam": "IPv4", 00:20:18.641 "traddr": "10.0.0.2", 00:20:18.641 "trsvcid": "4420" 00:20:18.641 }, 00:20:18.641 "peer_address": { 00:20:18.641 "trtype": "TCP", 00:20:18.641 "adrfam": "IPv4", 00:20:18.641 "traddr": "10.0.0.1", 00:20:18.641 "trsvcid": "57006" 00:20:18.641 }, 00:20:18.641 "auth": { 00:20:18.641 "state": "completed", 00:20:18.641 "digest": "sha256", 00:20:18.641 "dhgroup": "ffdhe4096" 00:20:18.641 } 00:20:18.641 } 00:20:18.641 ]' 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.641 07:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.641 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.902 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:18.902 07:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:19.841 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.841 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.841 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.841 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.841 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.841 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.841 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.841 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.100 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.671 00:20:20.671 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.671 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.671 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.931 { 00:20:20.931 "cntlid": 31, 00:20:20.931 "qid": 0, 00:20:20.931 "state": "enabled", 00:20:20.931 "thread": "nvmf_tgt_poll_group_000", 00:20:20.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.931 "listen_address": { 00:20:20.931 "trtype": "TCP", 00:20:20.931 "adrfam": "IPv4", 00:20:20.931 "traddr": "10.0.0.2", 00:20:20.931 "trsvcid": "4420" 00:20:20.931 }, 00:20:20.931 "peer_address": { 00:20:20.931 "trtype": "TCP", 00:20:20.931 "adrfam": "IPv4", 00:20:20.931 "traddr": "10.0.0.1", 00:20:20.931 "trsvcid": "57044" 00:20:20.931 }, 00:20:20.931 "auth": { 00:20:20.931 "state": "completed", 00:20:20.931 "digest": "sha256", 00:20:20.931 "dhgroup": "ffdhe4096" 00:20:20.931 } 00:20:20.931 } 00:20:20.931 ]' 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.931 07:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.189 07:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:21.189 07:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:22.126 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.126 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.126 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.126 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.126 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.126 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.126 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.126 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.126 07:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.384 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.953 00:20:22.953 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.953 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.953 07:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.211 { 00:20:23.211 "cntlid": 33, 00:20:23.211 "qid": 0, 00:20:23.211 "state": "enabled", 00:20:23.211 "thread": "nvmf_tgt_poll_group_000", 00:20:23.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.211 "listen_address": { 00:20:23.211 "trtype": "TCP", 00:20:23.211 "adrfam": "IPv4", 00:20:23.211 "traddr": "10.0.0.2", 00:20:23.211 
"trsvcid": "4420" 00:20:23.211 }, 00:20:23.211 "peer_address": { 00:20:23.211 "trtype": "TCP", 00:20:23.211 "adrfam": "IPv4", 00:20:23.211 "traddr": "10.0.0.1", 00:20:23.211 "trsvcid": "57084" 00:20:23.211 }, 00:20:23.211 "auth": { 00:20:23.211 "state": "completed", 00:20:23.211 "digest": "sha256", 00:20:23.211 "dhgroup": "ffdhe6144" 00:20:23.211 } 00:20:23.211 } 00:20:23.211 ]' 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.211 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.781 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:23.781 07:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:24.715 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.716 07:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.716 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.285 00:20:25.285 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.285 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.285 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.544 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.544 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.544 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.544 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.544 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.544 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.544 { 00:20:25.544 "cntlid": 35, 00:20:25.544 "qid": 0, 00:20:25.544 "state": "enabled", 00:20:25.544 "thread": "nvmf_tgt_poll_group_000", 00:20:25.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.544 "listen_address": { 00:20:25.544 "trtype": "TCP", 00:20:25.544 "adrfam": "IPv4", 00:20:25.544 "traddr": "10.0.0.2", 00:20:25.544 "trsvcid": "4420" 00:20:25.544 }, 00:20:25.544 "peer_address": { 00:20:25.544 "trtype": "TCP", 00:20:25.544 "adrfam": "IPv4", 00:20:25.544 "traddr": "10.0.0.1", 00:20:25.544 "trsvcid": "57120" 00:20:25.544 }, 00:20:25.544 "auth": { 00:20:25.544 "state": "completed", 00:20:25.544 "digest": "sha256", 00:20:25.544 "dhgroup": "ffdhe6144" 00:20:25.544 } 00:20:25.544 } 00:20:25.544 ]' 00:20:25.544 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.544 07:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.544 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.803 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.803 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.803 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.803 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.803 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.061 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:26.061 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:26.997 07:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.997 07:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.997 07:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.997 07:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.997 07:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.997 07:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.997 07:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.997 07:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.257 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.827 00:20:27.827 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.827 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.827 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.085 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.085 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.085 07:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.085 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.085 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.085 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.085 { 00:20:28.085 "cntlid": 37, 00:20:28.085 "qid": 0, 00:20:28.085 "state": "enabled", 00:20:28.085 "thread": "nvmf_tgt_poll_group_000", 00:20:28.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.085 "listen_address": { 00:20:28.085 "trtype": "TCP", 00:20:28.085 "adrfam": "IPv4", 00:20:28.085 "traddr": "10.0.0.2", 00:20:28.085 "trsvcid": "4420" 00:20:28.085 }, 00:20:28.085 "peer_address": { 00:20:28.085 "trtype": "TCP", 00:20:28.085 "adrfam": "IPv4", 00:20:28.085 "traddr": "10.0.0.1", 00:20:28.085 "trsvcid": "45506" 00:20:28.085 }, 00:20:28.085 "auth": { 00:20:28.085 "state": "completed", 00:20:28.085 "digest": "sha256", 00:20:28.085 "dhgroup": "ffdhe6144" 00:20:28.085 } 00:20:28.085 } 00:20:28.085 ]' 00:20:28.085 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.085 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.085 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.085 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.085 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.085 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.085 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.086 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.345 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:28.346 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:29.287 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.287 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.287 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.287 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.287 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.287 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.287 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.287 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.545 07:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.120 00:20:30.120 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.120 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.120 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.415 { 00:20:30.415 "cntlid": 39, 00:20:30.415 "qid": 0, 00:20:30.415 "state": "enabled", 00:20:30.415 "thread": "nvmf_tgt_poll_group_000", 00:20:30.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.415 "listen_address": { 00:20:30.415 "trtype": "TCP", 00:20:30.415 "adrfam": 
"IPv4", 00:20:30.415 "traddr": "10.0.0.2", 00:20:30.415 "trsvcid": "4420" 00:20:30.415 }, 00:20:30.415 "peer_address": { 00:20:30.415 "trtype": "TCP", 00:20:30.415 "adrfam": "IPv4", 00:20:30.415 "traddr": "10.0.0.1", 00:20:30.415 "trsvcid": "45538" 00:20:30.415 }, 00:20:30.415 "auth": { 00:20:30.415 "state": "completed", 00:20:30.415 "digest": "sha256", 00:20:30.415 "dhgroup": "ffdhe6144" 00:20:30.415 } 00:20:30.415 } 00:20:30.415 ]' 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.415 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.416 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.416 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.416 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.677 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:30.677 07:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.614 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.873 
07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.873 07:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.811 00:20:32.811 07:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.811 07:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.811 07:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.068 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.068 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.068 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.069 { 00:20:33.069 "cntlid": 41, 00:20:33.069 "qid": 0, 00:20:33.069 "state": "enabled", 00:20:33.069 "thread": "nvmf_tgt_poll_group_000", 00:20:33.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:33.069 "listen_address": { 00:20:33.069 "trtype": "TCP", 00:20:33.069 "adrfam": "IPv4", 00:20:33.069 "traddr": "10.0.0.2", 00:20:33.069 "trsvcid": "4420" 00:20:33.069 }, 00:20:33.069 "peer_address": { 00:20:33.069 "trtype": "TCP", 00:20:33.069 "adrfam": "IPv4", 00:20:33.069 "traddr": "10.0.0.1", 00:20:33.069 "trsvcid": "45568" 00:20:33.069 }, 00:20:33.069 "auth": { 00:20:33.069 "state": "completed", 00:20:33.069 "digest": "sha256", 00:20:33.069 "dhgroup": "ffdhe8192" 00:20:33.069 } 00:20:33.069 } 00:20:33.069 ]' 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.069 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.327 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:33.327 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:34.261 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.261 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.261 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.261 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.261 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.261 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.261 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.261 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.520 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.458 00:20:35.458 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.458 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.458 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.716 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.716 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.716 07:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.716 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.716 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.716 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.716 { 00:20:35.716 "cntlid": 43, 00:20:35.716 "qid": 0, 00:20:35.716 "state": "enabled", 00:20:35.716 "thread": "nvmf_tgt_poll_group_000", 00:20:35.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.716 "listen_address": { 00:20:35.716 "trtype": "TCP", 00:20:35.716 "adrfam": "IPv4", 00:20:35.716 "traddr": "10.0.0.2", 00:20:35.716 "trsvcid": "4420" 00:20:35.716 }, 00:20:35.716 "peer_address": { 00:20:35.716 "trtype": "TCP", 00:20:35.716 "adrfam": "IPv4", 00:20:35.716 "traddr": "10.0.0.1", 00:20:35.716 "trsvcid": "45592" 00:20:35.716 }, 00:20:35.716 "auth": { 00:20:35.716 "state": "completed", 00:20:35.716 "digest": "sha256", 00:20:35.716 "dhgroup": "ffdhe8192" 00:20:35.716 } 00:20:35.716 } 00:20:35.716 ]' 00:20:35.716 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.716 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.716 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.717 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.717 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.975 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.975 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.975 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.235 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:36.235 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:37.174 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.174 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.174 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.174 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.174 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.174 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.174 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.174 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.433 07:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.374 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.374 { 00:20:38.374 "cntlid": 45, 00:20:38.374 "qid": 0, 00:20:38.374 "state": "enabled", 00:20:38.374 "thread": "nvmf_tgt_poll_group_000", 00:20:38.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.374 
"listen_address": { 00:20:38.374 "trtype": "TCP", 00:20:38.374 "adrfam": "IPv4", 00:20:38.374 "traddr": "10.0.0.2", 00:20:38.374 "trsvcid": "4420" 00:20:38.374 }, 00:20:38.374 "peer_address": { 00:20:38.374 "trtype": "TCP", 00:20:38.374 "adrfam": "IPv4", 00:20:38.374 "traddr": "10.0.0.1", 00:20:38.374 "trsvcid": "47392" 00:20:38.374 }, 00:20:38.374 "auth": { 00:20:38.374 "state": "completed", 00:20:38.374 "digest": "sha256", 00:20:38.374 "dhgroup": "ffdhe8192" 00:20:38.374 } 00:20:38.374 } 00:20:38.374 ]' 00:20:38.374 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.632 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.632 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.632 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.632 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.632 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.632 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.632 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.890 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:38.890 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:39.827 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.827 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.827 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.827 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.827 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.827 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.827 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.827 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.086 07:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.086 07:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.086 07:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.086 07:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.086 07:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.025 00:20:41.025 07:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.025 07:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:41.025 07:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.025 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.025 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.025 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.025 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.025 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.025 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.025 { 00:20:41.025 "cntlid": 47, 00:20:41.025 "qid": 0, 00:20:41.025 "state": "enabled", 00:20:41.025 "thread": "nvmf_tgt_poll_group_000", 00:20:41.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.025 "listen_address": { 00:20:41.025 "trtype": "TCP", 00:20:41.025 "adrfam": "IPv4", 00:20:41.025 "traddr": "10.0.0.2", 00:20:41.025 "trsvcid": "4420" 00:20:41.025 }, 00:20:41.025 "peer_address": { 00:20:41.025 "trtype": "TCP", 00:20:41.025 "adrfam": "IPv4", 00:20:41.025 "traddr": "10.0.0.1", 00:20:41.025 "trsvcid": "47408" 00:20:41.025 }, 00:20:41.025 "auth": { 00:20:41.025 "state": "completed", 00:20:41.025 "digest": "sha256", 00:20:41.025 "dhgroup": "ffdhe8192" 00:20:41.025 } 00:20:41.025 } 00:20:41.025 ]' 00:20:41.025 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.284 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.284 07:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.284 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.284 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.284 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.284 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.284 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.543 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:41.543 07:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.479 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.738 
07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.738 07:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.306 00:20:43.306 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.306 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.306 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.565 { 00:20:43.565 "cntlid": 49, 00:20:43.565 "qid": 0, 00:20:43.565 "state": "enabled", 00:20:43.565 "thread": "nvmf_tgt_poll_group_000", 00:20:43.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.565 "listen_address": { 00:20:43.565 "trtype": "TCP", 00:20:43.565 "adrfam": "IPv4", 00:20:43.565 "traddr": "10.0.0.2", 00:20:43.565 "trsvcid": "4420" 00:20:43.565 }, 00:20:43.565 "peer_address": { 00:20:43.565 "trtype": "TCP", 00:20:43.565 "adrfam": "IPv4", 00:20:43.565 "traddr": "10.0.0.1", 00:20:43.565 "trsvcid": "47436" 00:20:43.565 }, 00:20:43.565 "auth": { 00:20:43.565 "state": "completed", 00:20:43.565 "digest": "sha384", 00:20:43.565 "dhgroup": "null" 00:20:43.565 } 00:20:43.565 } 00:20:43.565 ]' 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:43.565 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.132 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:44.132 07:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:45.069 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.069 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.069 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.069 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.069 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.069 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.069 07:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.069 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.069 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.327 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.327 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.586 00:20:45.586 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.586 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.586 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.844 { 00:20:45.844 "cntlid": 51, 00:20:45.844 "qid": 0, 00:20:45.844 "state": "enabled", 00:20:45.844 "thread": "nvmf_tgt_poll_group_000", 00:20:45.844 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.844 "listen_address": { 00:20:45.844 "trtype": "TCP", 00:20:45.844 "adrfam": "IPv4", 00:20:45.844 "traddr": "10.0.0.2", 00:20:45.844 "trsvcid": "4420" 00:20:45.844 }, 00:20:45.844 "peer_address": { 00:20:45.844 "trtype": "TCP", 00:20:45.844 "adrfam": "IPv4", 00:20:45.844 "traddr": "10.0.0.1", 00:20:45.844 "trsvcid": "47470" 00:20:45.844 }, 00:20:45.844 "auth": { 00:20:45.844 "state": "completed", 00:20:45.844 "digest": "sha384", 00:20:45.844 "dhgroup": "null" 00:20:45.844 } 00:20:45.844 } 00:20:45.844 ]' 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.844 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.411 07:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:46.411 07:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:47.349 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.349 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.349 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.349 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.349 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.608 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.867 00:20:47.867 07:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.867 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.867 07:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.126 { 00:20:48.126 "cntlid": 53, 00:20:48.126 "qid": 0, 00:20:48.126 "state": "enabled", 00:20:48.126 "thread": "nvmf_tgt_poll_group_000", 00:20:48.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.126 "listen_address": { 00:20:48.126 "trtype": "TCP", 00:20:48.126 "adrfam": "IPv4", 00:20:48.126 "traddr": "10.0.0.2", 00:20:48.126 "trsvcid": "4420" 00:20:48.126 }, 00:20:48.126 "peer_address": { 00:20:48.126 "trtype": "TCP", 00:20:48.126 "adrfam": "IPv4", 00:20:48.126 "traddr": "10.0.0.1", 00:20:48.126 "trsvcid": "46864" 00:20:48.126 }, 00:20:48.126 "auth": { 00:20:48.126 "state": "completed", 00:20:48.126 "digest": "sha384", 00:20:48.126 "dhgroup": "null" 00:20:48.126 } 00:20:48.126 } 00:20:48.126 ]' 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.126 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.385 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.385 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.385 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.385 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.385 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.644 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:48.644 07:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:49.581 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.581 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.581 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.581 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.581 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.581 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.581 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:49.581 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:49.839 
07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.839 07:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.098 00:20:50.098 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.098 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.098 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.356 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.356 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.356 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.356 07:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.356 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.356 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.356 { 00:20:50.356 "cntlid": 55, 00:20:50.356 "qid": 0, 00:20:50.356 "state": "enabled", 00:20:50.356 "thread": "nvmf_tgt_poll_group_000", 00:20:50.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.356 "listen_address": { 00:20:50.356 "trtype": "TCP", 00:20:50.356 "adrfam": "IPv4", 00:20:50.356 "traddr": "10.0.0.2", 00:20:50.356 "trsvcid": "4420" 00:20:50.356 }, 00:20:50.356 "peer_address": { 00:20:50.356 "trtype": "TCP", 00:20:50.356 "adrfam": "IPv4", 00:20:50.356 "traddr": "10.0.0.1", 00:20:50.356 "trsvcid": "46898" 00:20:50.356 }, 00:20:50.356 "auth": { 00:20:50.356 "state": "completed", 00:20:50.356 "digest": "sha384", 00:20:50.356 "dhgroup": "null" 00:20:50.356 } 00:20:50.356 } 00:20:50.356 ]' 00:20:50.356 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.615 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.615 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.615 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:50.615 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.615 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.615 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.615 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.873 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:50.873 07:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:51.808 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.808 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.808 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.808 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.808 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.808 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.808 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.808 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.808 07:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.066 07:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.325 00:20:52.325 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.325 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.325 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.583 { 00:20:52.583 "cntlid": 57, 00:20:52.583 "qid": 0, 00:20:52.583 "state": "enabled", 00:20:52.583 "thread": "nvmf_tgt_poll_group_000", 00:20:52.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.583 "listen_address": { 00:20:52.583 "trtype": "TCP", 00:20:52.583 "adrfam": "IPv4", 00:20:52.583 "traddr": "10.0.0.2", 00:20:52.583 
"trsvcid": "4420" 00:20:52.583 }, 00:20:52.583 "peer_address": { 00:20:52.583 "trtype": "TCP", 00:20:52.583 "adrfam": "IPv4", 00:20:52.583 "traddr": "10.0.0.1", 00:20:52.583 "trsvcid": "46926" 00:20:52.583 }, 00:20:52.583 "auth": { 00:20:52.583 "state": "completed", 00:20:52.583 "digest": "sha384", 00:20:52.583 "dhgroup": "ffdhe2048" 00:20:52.583 } 00:20:52.583 } 00:20:52.583 ]' 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.583 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.842 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.842 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.842 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.102 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:53.102 07:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:20:54.038 07:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.038 07:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.038 07:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.038 07:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.038 07:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.038 07:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.038 07:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.038 07:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.038 07:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.038 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.605 00:20:54.605 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.605 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.605 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.862 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.862 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.862 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.863 { 00:20:54.863 "cntlid": 59, 00:20:54.863 "qid": 0, 00:20:54.863 "state": "enabled", 00:20:54.863 "thread": "nvmf_tgt_poll_group_000", 00:20:54.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.863 "listen_address": { 00:20:54.863 "trtype": "TCP", 00:20:54.863 "adrfam": "IPv4", 00:20:54.863 "traddr": "10.0.0.2", 00:20:54.863 "trsvcid": "4420" 00:20:54.863 }, 00:20:54.863 "peer_address": { 00:20:54.863 "trtype": "TCP", 00:20:54.863 "adrfam": "IPv4", 00:20:54.863 "traddr": "10.0.0.1", 00:20:54.863 "trsvcid": "46952" 00:20:54.863 }, 00:20:54.863 "auth": { 00:20:54.863 "state": "completed", 00:20:54.863 "digest": "sha384", 00:20:54.863 "dhgroup": "ffdhe2048" 00:20:54.863 } 00:20:54.863 } 00:20:54.863 ]' 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.863 07:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.863 07:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.158 07:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:55.158 07:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:20:56.127 07:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.127 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.127 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.127 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.127 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.127 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.127 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.127 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.385 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.643 00:20:56.643 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.643 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.643 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.901 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.901 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.901 07:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.901 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.901 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.901 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.901 { 00:20:56.901 "cntlid": 61, 00:20:56.901 "qid": 0, 00:20:56.901 "state": "enabled", 00:20:56.901 "thread": "nvmf_tgt_poll_group_000", 00:20:56.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.901 "listen_address": { 00:20:56.901 "trtype": "TCP", 00:20:56.901 "adrfam": "IPv4", 00:20:56.901 "traddr": "10.0.0.2", 00:20:56.901 "trsvcid": "4420" 00:20:56.901 }, 00:20:56.901 "peer_address": { 00:20:56.901 "trtype": "TCP", 00:20:56.901 "adrfam": "IPv4", 00:20:56.901 "traddr": "10.0.0.1", 00:20:56.901 "trsvcid": "46982" 00:20:56.901 }, 00:20:56.901 "auth": { 00:20:56.901 "state": "completed", 00:20:56.901 "digest": "sha384", 00:20:56.901 "dhgroup": "ffdhe2048" 00:20:56.901 } 00:20:56.901 } 00:20:56.901 ]' 00:20:56.901 07:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.159 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.159 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.159 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.159 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.159 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.159 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.159 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.417 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:57.417 07:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:20:58.355 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.355 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.355 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.355 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.355 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.355 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.355 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.355 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.614 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.873 00:20:58.873 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.873 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.873 07:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.439 { 00:20:59.439 "cntlid": 63, 00:20:59.439 "qid": 0, 00:20:59.439 "state": "enabled", 00:20:59.439 "thread": "nvmf_tgt_poll_group_000", 00:20:59.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.439 "listen_address": { 00:20:59.439 "trtype": "TCP", 00:20:59.439 "adrfam": 
"IPv4", 00:20:59.439 "traddr": "10.0.0.2", 00:20:59.439 "trsvcid": "4420" 00:20:59.439 }, 00:20:59.439 "peer_address": { 00:20:59.439 "trtype": "TCP", 00:20:59.439 "adrfam": "IPv4", 00:20:59.439 "traddr": "10.0.0.1", 00:20:59.439 "trsvcid": "59882" 00:20:59.439 }, 00:20:59.439 "auth": { 00:20:59.439 "state": "completed", 00:20:59.439 "digest": "sha384", 00:20:59.439 "dhgroup": "ffdhe2048" 00:20:59.439 } 00:20:59.439 } 00:20:59.439 ]' 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.439 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.699 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:20:59.699 07:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.634 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.893 
07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.893 07:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.151 00:21:01.151 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.151 07:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.151 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.409 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.409 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.409 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.409 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.409 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.409 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.409 { 00:21:01.409 "cntlid": 65, 00:21:01.409 "qid": 0, 00:21:01.409 "state": "enabled", 00:21:01.409 "thread": "nvmf_tgt_poll_group_000", 00:21:01.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.409 "listen_address": { 00:21:01.409 "trtype": "TCP", 00:21:01.409 "adrfam": "IPv4", 00:21:01.409 "traddr": "10.0.0.2", 00:21:01.409 "trsvcid": "4420" 00:21:01.409 }, 00:21:01.409 "peer_address": { 00:21:01.409 "trtype": "TCP", 00:21:01.409 "adrfam": "IPv4", 00:21:01.409 "traddr": "10.0.0.1", 00:21:01.409 "trsvcid": "59916" 00:21:01.409 }, 00:21:01.409 "auth": { 00:21:01.409 "state": "completed", 00:21:01.409 "digest": "sha384", 00:21:01.409 "dhgroup": "ffdhe3072" 00:21:01.409 } 00:21:01.409 } 00:21:01.409 ]' 00:21:01.409 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.667 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:01.667 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.667 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.667 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.667 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.667 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.667 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.925 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:01.925 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:02.863 07:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.863 07:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.863 07:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.863 07:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.863 07:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.863 07:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.863 07:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.863 07:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.122 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.381 00:21:03.639 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.639 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.639 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.897 07:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.897 { 00:21:03.897 "cntlid": 67, 00:21:03.897 "qid": 0, 00:21:03.897 "state": "enabled", 00:21:03.897 "thread": "nvmf_tgt_poll_group_000", 00:21:03.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.897 "listen_address": { 00:21:03.897 "trtype": "TCP", 00:21:03.897 "adrfam": "IPv4", 00:21:03.897 "traddr": "10.0.0.2", 00:21:03.897 "trsvcid": "4420" 00:21:03.897 }, 00:21:03.897 "peer_address": { 00:21:03.897 "trtype": "TCP", 00:21:03.897 "adrfam": "IPv4", 00:21:03.897 "traddr": "10.0.0.1", 00:21:03.897 "trsvcid": "59936" 00:21:03.897 }, 00:21:03.897 "auth": { 00:21:03.897 "state": "completed", 00:21:03.897 "digest": "sha384", 00:21:03.897 "dhgroup": "ffdhe3072" 00:21:03.897 } 00:21:03.897 } 00:21:03.897 ]' 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.897 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.155 07:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:04.155 07:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:05.092 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.092 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.092 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.092 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.092 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.092 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.092 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.092 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.350 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:05.350 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.350 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.350 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:05.350 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.350 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.350 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.351 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.351 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.351 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.351 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.351 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.351 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.609 00:21:05.866 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.866 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.866 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.125 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.125 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.125 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.125 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.125 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.125 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.125 { 00:21:06.125 "cntlid": 69, 00:21:06.125 "qid": 0, 00:21:06.125 "state": "enabled", 00:21:06.125 "thread": "nvmf_tgt_poll_group_000", 00:21:06.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.125 
"listen_address": { 00:21:06.125 "trtype": "TCP", 00:21:06.125 "adrfam": "IPv4", 00:21:06.125 "traddr": "10.0.0.2", 00:21:06.125 "trsvcid": "4420" 00:21:06.125 }, 00:21:06.125 "peer_address": { 00:21:06.125 "trtype": "TCP", 00:21:06.125 "adrfam": "IPv4", 00:21:06.125 "traddr": "10.0.0.1", 00:21:06.125 "trsvcid": "59970" 00:21:06.125 }, 00:21:06.125 "auth": { 00:21:06.125 "state": "completed", 00:21:06.125 "digest": "sha384", 00:21:06.125 "dhgroup": "ffdhe3072" 00:21:06.125 } 00:21:06.125 } 00:21:06.125 ]' 00:21:06.125 07:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.125 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.125 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.125 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.125 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.125 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.125 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.125 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.384 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:06.384 07:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:07.321 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.321 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.321 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.321 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.321 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.321 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.321 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.321 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.579 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.580 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.580 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.146 00:21:08.146 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.146 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.146 07:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.405 { 00:21:08.405 "cntlid": 71, 00:21:08.405 "qid": 0, 00:21:08.405 "state": "enabled", 00:21:08.405 "thread": "nvmf_tgt_poll_group_000", 00:21:08.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.405 "listen_address": { 00:21:08.405 "trtype": "TCP", 00:21:08.405 "adrfam": "IPv4", 00:21:08.405 "traddr": "10.0.0.2", 00:21:08.405 "trsvcid": "4420" 00:21:08.405 }, 00:21:08.405 "peer_address": { 00:21:08.405 "trtype": "TCP", 00:21:08.405 "adrfam": "IPv4", 00:21:08.405 "traddr": "10.0.0.1", 00:21:08.405 "trsvcid": "44136" 00:21:08.405 }, 00:21:08.405 "auth": { 00:21:08.405 "state": "completed", 00:21:08.405 "digest": "sha384", 00:21:08.405 "dhgroup": "ffdhe3072" 00:21:08.405 } 00:21:08.405 } 00:21:08.405 ]' 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.405 07:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.405 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.663 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:08.663 07:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.601 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.859 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.860 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.860 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.118 00:21:10.118 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.118 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.118 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.687 07:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.687 { 00:21:10.687 "cntlid": 73, 00:21:10.687 "qid": 0, 00:21:10.687 "state": "enabled", 00:21:10.687 "thread": "nvmf_tgt_poll_group_000", 00:21:10.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.687 "listen_address": { 00:21:10.687 "trtype": "TCP", 00:21:10.687 "adrfam": "IPv4", 00:21:10.687 "traddr": "10.0.0.2", 00:21:10.687 "trsvcid": "4420" 00:21:10.687 }, 00:21:10.687 "peer_address": { 00:21:10.687 "trtype": "TCP", 00:21:10.687 "adrfam": "IPv4", 00:21:10.687 "traddr": "10.0.0.1", 00:21:10.687 "trsvcid": "44154" 00:21:10.687 }, 00:21:10.687 "auth": { 00:21:10.687 "state": "completed", 00:21:10.687 "digest": "sha384", 00:21:10.687 "dhgroup": "ffdhe4096" 00:21:10.687 } 00:21:10.687 } 00:21:10.687 ]' 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.687 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.688 07:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.946 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:10.946 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:11.883 07:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.883 07:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.883 07:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.883 07:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.883 07:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.883 07:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.883 07:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.883 07:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.141 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.710 00:21:12.710 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.710 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.710 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.710 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.710 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.710 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.710 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.969 { 00:21:12.969 "cntlid": 75, 00:21:12.969 "qid": 0, 00:21:12.969 "state": "enabled", 00:21:12.969 "thread": "nvmf_tgt_poll_group_000", 00:21:12.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.969 
"listen_address": { 00:21:12.969 "trtype": "TCP", 00:21:12.969 "adrfam": "IPv4", 00:21:12.969 "traddr": "10.0.0.2", 00:21:12.969 "trsvcid": "4420" 00:21:12.969 }, 00:21:12.969 "peer_address": { 00:21:12.969 "trtype": "TCP", 00:21:12.969 "adrfam": "IPv4", 00:21:12.969 "traddr": "10.0.0.1", 00:21:12.969 "trsvcid": "44188" 00:21:12.969 }, 00:21:12.969 "auth": { 00:21:12.969 "state": "completed", 00:21:12.969 "digest": "sha384", 00:21:12.969 "dhgroup": "ffdhe4096" 00:21:12.969 } 00:21:12.969 } 00:21:12.969 ]' 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.969 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.227 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:13.227 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:14.162 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.162 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.162 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.162 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.162 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.162 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.162 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.162 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.421 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.989 00:21:14.989 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:14.989 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.989 07:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.248 { 00:21:15.248 "cntlid": 77, 00:21:15.248 "qid": 0, 00:21:15.248 "state": "enabled", 00:21:15.248 "thread": "nvmf_tgt_poll_group_000", 00:21:15.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.248 "listen_address": { 00:21:15.248 "trtype": "TCP", 00:21:15.248 "adrfam": "IPv4", 00:21:15.248 "traddr": "10.0.0.2", 00:21:15.248 "trsvcid": "4420" 00:21:15.248 }, 00:21:15.248 "peer_address": { 00:21:15.248 "trtype": "TCP", 00:21:15.248 "adrfam": "IPv4", 00:21:15.248 "traddr": "10.0.0.1", 00:21:15.248 "trsvcid": "44208" 00:21:15.248 }, 00:21:15.248 "auth": { 00:21:15.248 "state": "completed", 00:21:15.248 "digest": "sha384", 00:21:15.248 "dhgroup": "ffdhe4096" 00:21:15.248 } 00:21:15.248 } 00:21:15.248 ]' 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.248 07:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.248 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.506 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:15.506 07:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:16.444 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.444 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.444 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.444 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.444 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.444 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.444 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.444 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:16.702 07:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.702 07:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.270 00:21:17.270 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.270 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.270 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.270 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.270 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.270 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.270 07:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.270 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.529 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.529 { 00:21:17.529 "cntlid": 79, 00:21:17.529 "qid": 0, 00:21:17.529 "state": "enabled", 00:21:17.529 "thread": "nvmf_tgt_poll_group_000", 00:21:17.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.529 "listen_address": { 00:21:17.529 "trtype": "TCP", 00:21:17.529 "adrfam": "IPv4", 00:21:17.529 "traddr": "10.0.0.2", 00:21:17.529 "trsvcid": "4420" 00:21:17.529 }, 00:21:17.529 "peer_address": { 00:21:17.529 "trtype": "TCP", 00:21:17.529 "adrfam": "IPv4", 00:21:17.529 "traddr": "10.0.0.1", 00:21:17.529 "trsvcid": "39182" 00:21:17.529 }, 00:21:17.529 "auth": { 00:21:17.529 "state": "completed", 00:21:17.529 "digest": "sha384", 00:21:17.529 "dhgroup": "ffdhe4096" 00:21:17.529 } 00:21:17.529 } 00:21:17.529 ]' 00:21:17.529 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.529 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.529 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.529 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.529 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.529 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.529 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.529 07:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.788 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:17.788 07:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:18.726 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.984 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.551 00:21:19.551 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.551 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.551 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.809 { 00:21:19.809 "cntlid": 81, 00:21:19.809 "qid": 0, 00:21:19.809 "state": "enabled", 00:21:19.809 "thread": "nvmf_tgt_poll_group_000", 00:21:19.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.809 "listen_address": { 
00:21:19.809 "trtype": "TCP", 00:21:19.809 "adrfam": "IPv4", 00:21:19.809 "traddr": "10.0.0.2", 00:21:19.809 "trsvcid": "4420" 00:21:19.809 }, 00:21:19.809 "peer_address": { 00:21:19.809 "trtype": "TCP", 00:21:19.809 "adrfam": "IPv4", 00:21:19.809 "traddr": "10.0.0.1", 00:21:19.809 "trsvcid": "39212" 00:21:19.809 }, 00:21:19.809 "auth": { 00:21:19.809 "state": "completed", 00:21:19.809 "digest": "sha384", 00:21:19.809 "dhgroup": "ffdhe6144" 00:21:19.809 } 00:21:19.809 } 00:21:19.809 ]' 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.809 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.068 07:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:20.068 07:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:21.030 07:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.030 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.030 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.030 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.030 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.030 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.030 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.030 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.288 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.289 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.853 00:21:21.853 07:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.853 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.853 07:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.111 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.111 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.111 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.111 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.111 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.111 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.111 { 00:21:22.111 "cntlid": 83, 00:21:22.111 "qid": 0, 00:21:22.111 "state": "enabled", 00:21:22.111 "thread": "nvmf_tgt_poll_group_000", 00:21:22.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.111 "listen_address": { 00:21:22.111 "trtype": "TCP", 00:21:22.111 "adrfam": "IPv4", 00:21:22.111 "traddr": "10.0.0.2", 00:21:22.111 "trsvcid": "4420" 00:21:22.111 }, 00:21:22.111 "peer_address": { 00:21:22.111 "trtype": "TCP", 00:21:22.111 "adrfam": "IPv4", 00:21:22.111 "traddr": "10.0.0.1", 00:21:22.111 "trsvcid": "39240" 00:21:22.111 }, 00:21:22.111 "auth": { 00:21:22.111 "state": "completed", 00:21:22.111 "digest": "sha384", 00:21:22.111 "dhgroup": "ffdhe6144" 00:21:22.111 } 00:21:22.111 } 00:21:22.111 ]' 00:21:22.111 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:22.369 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.369 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.369 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.369 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.369 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.369 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.369 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.627 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:22.627 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:23.567 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.567 07:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.567 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.567 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.567 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.567 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.567 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.567 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.826 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.395 00:21:24.395 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.395 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.395 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.653 { 00:21:24.653 "cntlid": 85, 00:21:24.653 "qid": 0, 00:21:24.653 "state": "enabled", 00:21:24.653 "thread": "nvmf_tgt_poll_group_000", 00:21:24.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.653 "listen_address": { 00:21:24.653 "trtype": "TCP", 00:21:24.653 "adrfam": "IPv4", 00:21:24.653 "traddr": "10.0.0.2", 00:21:24.653 "trsvcid": "4420" 00:21:24.653 }, 00:21:24.653 "peer_address": { 00:21:24.653 "trtype": "TCP", 00:21:24.653 "adrfam": "IPv4", 00:21:24.653 "traddr": "10.0.0.1", 00:21:24.653 "trsvcid": "39266" 00:21:24.653 }, 00:21:24.653 "auth": { 00:21:24.653 "state": "completed", 00:21:24.653 "digest": "sha384", 00:21:24.653 "dhgroup": "ffdhe6144" 00:21:24.653 } 00:21:24.653 } 00:21:24.653 ]' 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.653 07:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.221 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:25.221 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:25.787 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.787 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.787 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.787 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.787 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.787 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:25.787 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.787 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.356 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.614 00:21:26.614 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.614 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.614 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.874 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.874 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.132 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.132 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.132 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.132 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.132 { 00:21:27.132 "cntlid": 87, 00:21:27.132 "qid": 0, 00:21:27.132 "state": "enabled", 00:21:27.132 "thread": "nvmf_tgt_poll_group_000", 00:21:27.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.132 "listen_address": { 00:21:27.132 "trtype": 
"TCP", 00:21:27.132 "adrfam": "IPv4", 00:21:27.132 "traddr": "10.0.0.2", 00:21:27.132 "trsvcid": "4420" 00:21:27.132 }, 00:21:27.132 "peer_address": { 00:21:27.132 "trtype": "TCP", 00:21:27.132 "adrfam": "IPv4", 00:21:27.132 "traddr": "10.0.0.1", 00:21:27.132 "trsvcid": "39294" 00:21:27.132 }, 00:21:27.132 "auth": { 00:21:27.132 "state": "completed", 00:21:27.132 "digest": "sha384", 00:21:27.132 "dhgroup": "ffdhe6144" 00:21:27.132 } 00:21:27.132 } 00:21:27.132 ]' 00:21:27.132 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.132 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.132 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.132 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.132 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.132 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.132 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.132 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.390 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:27.390 07:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.330 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.589 07:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.589 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.528 00:21:29.528 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.528 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.528 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.787 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.787 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.787 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.787 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.787 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.787 { 00:21:29.787 "cntlid": 89, 00:21:29.787 "qid": 0, 00:21:29.787 "state": "enabled", 00:21:29.787 "thread": "nvmf_tgt_poll_group_000", 00:21:29.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.787 "listen_address": { 00:21:29.787 "trtype": "TCP", 00:21:29.787 "adrfam": "IPv4", 00:21:29.787 "traddr": "10.0.0.2", 00:21:29.787 "trsvcid": "4420" 00:21:29.787 }, 00:21:29.787 "peer_address": { 00:21:29.787 "trtype": "TCP", 00:21:29.787 "adrfam": "IPv4", 00:21:29.787 "traddr": "10.0.0.1", 00:21:29.787 "trsvcid": "47908" 00:21:29.787 }, 00:21:29.787 "auth": { 00:21:29.787 "state": "completed", 00:21:29.787 "digest": "sha384", 00:21:29.787 "dhgroup": "ffdhe8192" 00:21:29.787 } 00:21:29.787 } 00:21:29.787 ]' 00:21:29.787 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.046 07:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.046 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.046 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.046 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.046 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.046 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.046 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.304 07:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:30.304 07:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:31.243 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:31.243 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.243 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.243 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.243 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.243 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.243 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.243 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.502 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:31.502 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.502 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.502 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.502 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.502 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.502 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.502 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.503 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.503 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.503 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.503 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.503 07:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.442 00:21:32.442 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.442 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.442 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.700 { 00:21:32.700 "cntlid": 91, 00:21:32.700 "qid": 0, 00:21:32.700 "state": "enabled", 00:21:32.700 "thread": "nvmf_tgt_poll_group_000", 00:21:32.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.700 "listen_address": { 00:21:32.700 "trtype": "TCP", 00:21:32.700 "adrfam": "IPv4", 00:21:32.700 "traddr": "10.0.0.2", 00:21:32.700 "trsvcid": "4420" 00:21:32.700 }, 00:21:32.700 "peer_address": { 00:21:32.700 "trtype": "TCP", 00:21:32.700 "adrfam": "IPv4", 00:21:32.700 "traddr": "10.0.0.1", 00:21:32.700 "trsvcid": "47936" 00:21:32.700 }, 00:21:32.700 "auth": { 00:21:32.700 "state": "completed", 00:21:32.700 "digest": "sha384", 00:21:32.700 "dhgroup": "ffdhe8192" 00:21:32.700 } 00:21:32.700 } 00:21:32.700 ]' 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.700 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.959 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:32.959 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:33.895 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.895 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.895 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.895 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.895 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.895 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:33.895 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.895 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.153 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.087 00:21:35.087 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.087 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.087 07:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.345 { 00:21:35.345 "cntlid": 93, 00:21:35.345 "qid": 0, 00:21:35.345 "state": "enabled", 00:21:35.345 "thread": "nvmf_tgt_poll_group_000", 00:21:35.345 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.345 "listen_address": { 00:21:35.345 "trtype": "TCP", 00:21:35.345 "adrfam": "IPv4", 00:21:35.345 "traddr": "10.0.0.2", 00:21:35.345 "trsvcid": "4420" 00:21:35.345 }, 00:21:35.345 "peer_address": { 00:21:35.345 "trtype": "TCP", 00:21:35.345 "adrfam": "IPv4", 00:21:35.345 "traddr": "10.0.0.1", 00:21:35.345 "trsvcid": "47966" 00:21:35.345 }, 00:21:35.345 "auth": { 00:21:35.345 "state": "completed", 00:21:35.345 "digest": "sha384", 00:21:35.345 "dhgroup": "ffdhe8192" 00:21:35.345 } 00:21:35.345 } 00:21:35.345 ]' 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.345 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.346 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.604 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:35.604 07:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:36.541 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.541 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.541 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.541 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.800 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.800 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.800 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.800 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.058 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.993 00:21:37.993 07:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:37.993 07:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.993 07:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.251 { 00:21:38.251 "cntlid": 95, 00:21:38.251 "qid": 0, 00:21:38.251 "state": "enabled", 00:21:38.251 "thread": "nvmf_tgt_poll_group_000", 00:21:38.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.251 "listen_address": { 00:21:38.251 "trtype": "TCP", 00:21:38.251 "adrfam": "IPv4", 00:21:38.251 "traddr": "10.0.0.2", 00:21:38.251 "trsvcid": "4420" 00:21:38.251 }, 00:21:38.251 "peer_address": { 00:21:38.251 "trtype": "TCP", 00:21:38.251 "adrfam": "IPv4", 00:21:38.251 "traddr": "10.0.0.1", 00:21:38.251 "trsvcid": "33094" 00:21:38.251 }, 00:21:38.251 "auth": { 00:21:38.251 "state": "completed", 00:21:38.251 "digest": "sha384", 00:21:38.251 "dhgroup": "ffdhe8192" 00:21:38.251 } 00:21:38.251 } 00:21:38.251 ]' 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.251 07:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.251 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.820 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:38.820 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.757 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.327 00:21:40.327 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.327 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.327 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.586 07:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.586 { 00:21:40.586 "cntlid": 97, 00:21:40.586 "qid": 0, 00:21:40.586 "state": "enabled", 00:21:40.586 "thread": "nvmf_tgt_poll_group_000", 00:21:40.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.586 "listen_address": { 00:21:40.586 "trtype": "TCP", 00:21:40.586 "adrfam": "IPv4", 00:21:40.586 "traddr": "10.0.0.2", 00:21:40.586 "trsvcid": "4420" 00:21:40.586 }, 00:21:40.586 "peer_address": { 00:21:40.586 "trtype": "TCP", 00:21:40.586 "adrfam": "IPv4", 00:21:40.586 "traddr": "10.0.0.1", 00:21:40.586 "trsvcid": "33108" 00:21:40.586 }, 00:21:40.586 "auth": { 00:21:40.586 "state": "completed", 00:21:40.586 "digest": "sha512", 00:21:40.586 "dhgroup": "null" 00:21:40.586 } 00:21:40.586 } 00:21:40.586 ]' 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.586 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.846 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:40.846 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:41.786 07:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.786 07:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.786 07:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.786 07:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.786 07:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.786 07:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.786 07:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.786 07:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.045 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.611 00:21:42.611 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.611 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.611 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.869 { 00:21:42.869 "cntlid": 99, 
00:21:42.869 "qid": 0, 00:21:42.869 "state": "enabled", 00:21:42.869 "thread": "nvmf_tgt_poll_group_000", 00:21:42.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.869 "listen_address": { 00:21:42.869 "trtype": "TCP", 00:21:42.869 "adrfam": "IPv4", 00:21:42.869 "traddr": "10.0.0.2", 00:21:42.869 "trsvcid": "4420" 00:21:42.869 }, 00:21:42.869 "peer_address": { 00:21:42.869 "trtype": "TCP", 00:21:42.869 "adrfam": "IPv4", 00:21:42.869 "traddr": "10.0.0.1", 00:21:42.869 "trsvcid": "33140" 00:21:42.869 }, 00:21:42.869 "auth": { 00:21:42.869 "state": "completed", 00:21:42.869 "digest": "sha512", 00:21:42.869 "dhgroup": "null" 00:21:42.869 } 00:21:42.869 } 00:21:42.869 ]' 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.869 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.128 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret 
DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:43.128 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:44.067 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.067 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.067 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.067 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.067 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.067 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.067 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.067 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.325 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.583 00:21:44.583 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.583 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.583 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.843 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.843 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.843 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.843 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.101 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.101 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.101 { 00:21:45.101 "cntlid": 101, 00:21:45.101 "qid": 0, 00:21:45.101 "state": "enabled", 00:21:45.101 "thread": "nvmf_tgt_poll_group_000", 00:21:45.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.101 "listen_address": { 00:21:45.101 "trtype": "TCP", 00:21:45.101 "adrfam": "IPv4", 00:21:45.101 "traddr": "10.0.0.2", 00:21:45.101 "trsvcid": "4420" 00:21:45.101 }, 00:21:45.101 "peer_address": { 00:21:45.101 "trtype": "TCP", 00:21:45.101 "adrfam": "IPv4", 00:21:45.101 "traddr": "10.0.0.1", 00:21:45.101 "trsvcid": "33168" 00:21:45.101 }, 00:21:45.101 "auth": { 00:21:45.101 "state": "completed", 00:21:45.101 "digest": "sha512", 00:21:45.101 "dhgroup": "null" 00:21:45.101 } 00:21:45.101 } 
00:21:45.101 ]' 00:21:45.101 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.101 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.101 07:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.101 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:45.101 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.101 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.101 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.101 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.359 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:45.359 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:46.355 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.355 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.355 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.355 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.355 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.355 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.355 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.355 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.355 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.613 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.872 00:21:46.872 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.872 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.872 07:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.130 { 00:21:47.130 "cntlid": 103, 00:21:47.130 "qid": 0, 00:21:47.130 "state": "enabled", 00:21:47.130 "thread": "nvmf_tgt_poll_group_000", 00:21:47.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.130 "listen_address": { 00:21:47.130 "trtype": "TCP", 00:21:47.130 "adrfam": "IPv4", 00:21:47.130 "traddr": "10.0.0.2", 00:21:47.130 "trsvcid": "4420" 00:21:47.130 }, 00:21:47.130 "peer_address": { 00:21:47.130 "trtype": "TCP", 00:21:47.130 "adrfam": "IPv4", 00:21:47.130 "traddr": "10.0.0.1", 00:21:47.130 "trsvcid": "33204" 00:21:47.130 }, 00:21:47.130 "auth": { 00:21:47.130 "state": "completed", 00:21:47.130 "digest": "sha512", 00:21:47.130 "dhgroup": "null" 00:21:47.130 } 00:21:47.130 } 00:21:47.130 ]' 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:47.130 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.388 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.388 07:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.388 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.648 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:47.648 07:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:48.584 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.584 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.584 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.584 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.584 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.584 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.584 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.584 07:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.584 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.842 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:48.842 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.842 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.842 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.843 07:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.101 00:21:49.101 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.101 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.101 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.359 { 00:21:49.359 "cntlid": 105, 00:21:49.359 "qid": 0, 00:21:49.359 "state": "enabled", 00:21:49.359 "thread": "nvmf_tgt_poll_group_000", 00:21:49.359 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.359 "listen_address": { 00:21:49.359 "trtype": "TCP", 00:21:49.359 "adrfam": "IPv4", 00:21:49.359 "traddr": "10.0.0.2", 00:21:49.359 "trsvcid": "4420" 00:21:49.359 }, 00:21:49.359 "peer_address": { 00:21:49.359 "trtype": "TCP", 00:21:49.359 "adrfam": "IPv4", 00:21:49.359 "traddr": "10.0.0.1", 00:21:49.359 "trsvcid": "42516" 00:21:49.359 }, 00:21:49.359 "auth": { 00:21:49.359 "state": "completed", 00:21:49.359 "digest": "sha512", 00:21:49.359 "dhgroup": "ffdhe2048" 00:21:49.359 } 00:21:49.359 } 00:21:49.359 ]' 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.359 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.618 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.618 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.618 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.876 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:49.876 07:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.814 07:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.814 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.074 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.074 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.074 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.074 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.333 00:21:51.333 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.333 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.333 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.592 { 00:21:51.592 "cntlid": 107, 00:21:51.592 "qid": 0, 00:21:51.592 "state": "enabled", 00:21:51.592 "thread": "nvmf_tgt_poll_group_000", 00:21:51.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.592 "listen_address": { 00:21:51.592 "trtype": "TCP", 00:21:51.592 "adrfam": "IPv4", 00:21:51.592 "traddr": "10.0.0.2", 00:21:51.592 "trsvcid": "4420" 00:21:51.592 }, 00:21:51.592 "peer_address": { 00:21:51.592 "trtype": "TCP", 00:21:51.592 "adrfam": "IPv4", 00:21:51.592 "traddr": "10.0.0.1", 00:21:51.592 "trsvcid": "42542" 00:21:51.592 }, 00:21:51.592 "auth": { 00:21:51.592 "state": 
"completed", 00:21:51.592 "digest": "sha512", 00:21:51.592 "dhgroup": "ffdhe2048" 00:21:51.592 } 00:21:51.592 } 00:21:51.592 ]' 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.592 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.851 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:51.851 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:21:52.785 07:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.785 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.785 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.785 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.785 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.785 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.785 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.785 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.044 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.614 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.614 
07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.614 { 00:21:53.614 "cntlid": 109, 00:21:53.614 "qid": 0, 00:21:53.614 "state": "enabled", 00:21:53.614 "thread": "nvmf_tgt_poll_group_000", 00:21:53.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.614 "listen_address": { 00:21:53.614 "trtype": "TCP", 00:21:53.614 "adrfam": "IPv4", 00:21:53.614 "traddr": "10.0.0.2", 00:21:53.614 "trsvcid": "4420" 00:21:53.614 }, 00:21:53.614 "peer_address": { 00:21:53.614 "trtype": "TCP", 00:21:53.614 "adrfam": "IPv4", 00:21:53.614 "traddr": "10.0.0.1", 00:21:53.614 "trsvcid": "42564" 00:21:53.614 }, 00:21:53.614 "auth": { 00:21:53.614 "state": "completed", 00:21:53.614 "digest": "sha512", 00:21:53.614 "dhgroup": "ffdhe2048" 00:21:53.614 } 00:21:53.614 } 00:21:53.614 ]' 00:21:53.614 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.873 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.873 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.873 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.873 07:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.873 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.873 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.873 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.131 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:54.131 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:21:55.071 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.071 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.071 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.071 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.071 
07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.071 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.071 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:55.071 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.329 07:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.329 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.896 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.896 { 00:21:55.896 "cntlid": 111, 
00:21:55.896 "qid": 0, 00:21:55.896 "state": "enabled", 00:21:55.896 "thread": "nvmf_tgt_poll_group_000", 00:21:55.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.896 "listen_address": { 00:21:55.896 "trtype": "TCP", 00:21:55.896 "adrfam": "IPv4", 00:21:55.896 "traddr": "10.0.0.2", 00:21:55.896 "trsvcid": "4420" 00:21:55.896 }, 00:21:55.896 "peer_address": { 00:21:55.896 "trtype": "TCP", 00:21:55.896 "adrfam": "IPv4", 00:21:55.896 "traddr": "10.0.0.1", 00:21:55.896 "trsvcid": "42586" 00:21:55.896 }, 00:21:55.896 "auth": { 00:21:55.896 "state": "completed", 00:21:55.896 "digest": "sha512", 00:21:55.896 "dhgroup": "ffdhe2048" 00:21:55.896 } 00:21:55.896 } 00:21:55.896 ]' 00:21:55.896 07:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.154 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.154 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.154 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:56.154 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.154 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.154 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.154 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.411 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:56.411 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.348 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.606 07:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.606 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:57.864
00:21:57.865 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:57.865 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:57.865 07:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:58.123 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:58.123 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:58.123 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.123 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.123 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.123 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:58.123 {
00:21:58.123 "cntlid": 113,
00:21:58.123 "qid": 0,
00:21:58.123 "state": "enabled",
00:21:58.123 "thread": "nvmf_tgt_poll_group_000",
00:21:58.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:58.123 "listen_address": {
00:21:58.123 "trtype": "TCP",
00:21:58.123 "adrfam": "IPv4",
00:21:58.123 "traddr": "10.0.0.2",
00:21:58.123 "trsvcid": "4420"
00:21:58.123 },
00:21:58.123 "peer_address": {
00:21:58.123 "trtype": "TCP",
00:21:58.123 "adrfam": "IPv4",
00:21:58.123 "traddr": "10.0.0.1",
00:21:58.123 "trsvcid": "39630"
00:21:58.123 },
00:21:58.123 "auth": {
00:21:58.123 "state": "completed",
00:21:58.123 "digest": "sha512",
00:21:58.123 "dhgroup": "ffdhe3072"
00:21:58.123 }
00:21:58.123 }
00:21:58.123 ]'
00:21:58.123 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:58.381 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:58.381 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:58.381 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:58.381 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:58.381 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:58.381 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:58.381 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:58.639 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=:
00:21:58.640 07:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=:
00:21:59.575 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:59.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:59.576 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:59.576 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.576 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.576 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.576 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:59.576 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:59.576 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:59.834 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:21:59.834 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:59.834 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:59.834 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:59.834 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:59.834 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:59.835 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:59.835 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.835 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.835 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.835 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:59.835 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:59.835 07:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:00.404
00:22:00.404 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:00.404 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:00.404 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:00.663 {
00:22:00.663 "cntlid": 115,
00:22:00.663 "qid": 0,
00:22:00.663 "state": "enabled",
00:22:00.663 "thread": "nvmf_tgt_poll_group_000",
00:22:00.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:00.663 "listen_address": {
00:22:00.663 "trtype": "TCP",
00:22:00.663 "adrfam": "IPv4",
00:22:00.663 "traddr": "10.0.0.2",
00:22:00.663 "trsvcid": "4420"
00:22:00.663 },
00:22:00.663 "peer_address": {
00:22:00.663 "trtype": "TCP",
00:22:00.663 "adrfam": "IPv4",
00:22:00.663 "traddr": "10.0.0.1",
00:22:00.663 "trsvcid": "39664"
00:22:00.663 },
00:22:00.663 "auth": {
00:22:00.663 "state": "completed",
00:22:00.663 "digest": "sha512",
00:22:00.663 "dhgroup": "ffdhe3072"
00:22:00.663 }
00:22:00.663 }
00:22:00.663 ]'
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:00.663 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:00.921 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==:
00:22:00.921 07:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==:
00:22:01.856 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:01.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:01.856 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:01.856 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.856 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:01.856 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:01.856 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:01.856 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:01.856 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:02.114 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:02.682
00:22:02.682 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:02.682 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:02.682 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:02.941 {
00:22:02.941 "cntlid": 117,
00:22:02.941 "qid": 0,
00:22:02.941 "state": "enabled",
00:22:02.941 "thread": "nvmf_tgt_poll_group_000",
00:22:02.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:02.941 "listen_address": {
00:22:02.941 "trtype": "TCP",
00:22:02.941 "adrfam": "IPv4",
00:22:02.941 "traddr": "10.0.0.2",
00:22:02.941 "trsvcid": "4420"
00:22:02.941 },
00:22:02.941 "peer_address": {
00:22:02.941 "trtype": "TCP",
00:22:02.941 "adrfam": "IPv4",
00:22:02.941 "traddr": "10.0.0.1",
00:22:02.941 "trsvcid": "39706"
00:22:02.941 },
00:22:02.941 "auth": {
00:22:02.941 "state": "completed",
00:22:02.941 "digest": "sha512",
00:22:02.941 "dhgroup": "ffdhe3072"
00:22:02.941 }
00:22:02.941 }
00:22:02.941 ]'
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:02.941 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:03.200 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO:
00:22:03.200 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO:
00:22:04.137 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:04.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:04.137 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:04.137 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.137 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:04.137 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.137 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:04.137 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:04.137 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:04.396 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:04.962
00:22:04.962 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:04.962 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:04.962 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:05.220 {
00:22:05.220 "cntlid": 119,
00:22:05.220 "qid": 0,
00:22:05.220 "state": "enabled",
00:22:05.220 "thread": "nvmf_tgt_poll_group_000",
00:22:05.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:05.220 "listen_address": {
00:22:05.220 "trtype": "TCP",
00:22:05.220 "adrfam": "IPv4",
00:22:05.220 "traddr": "10.0.0.2",
00:22:05.220 "trsvcid": "4420"
00:22:05.220 },
00:22:05.220 "peer_address": {
00:22:05.220 "trtype": "TCP",
00:22:05.220 "adrfam": "IPv4",
00:22:05.220 "traddr": "10.0.0.1",
00:22:05.220 "trsvcid": "39728"
00:22:05.220 },
00:22:05.220 "auth": {
00:22:05.220 "state": "completed",
00:22:05.220 "digest": "sha512",
00:22:05.220 "dhgroup": "ffdhe3072"
00:22:05.220 }
00:22:05.220 }
00:22:05.220 ]'
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:05.220 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:05.788 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=:
00:22:05.788 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=:
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:06.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:06.721 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:07.289
00:22:07.289 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:07.289 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:07.289 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:07.548 {
00:22:07.548 "cntlid": 121,
00:22:07.548 "qid": 0,
00:22:07.548 "state": "enabled",
00:22:07.548 "thread": "nvmf_tgt_poll_group_000",
00:22:07.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:07.548 "listen_address": {
00:22:07.548 "trtype": "TCP",
00:22:07.548 "adrfam": "IPv4",
00:22:07.548 "traddr": "10.0.0.2",
00:22:07.548 "trsvcid": "4420"
00:22:07.548 },
00:22:07.548 "peer_address": {
00:22:07.548 "trtype": "TCP",
00:22:07.548 "adrfam": "IPv4",
00:22:07.548 "traddr": "10.0.0.1",
00:22:07.548 "trsvcid": "53490"
00:22:07.548 },
00:22:07.548 "auth": {
00:22:07.548 "state": "completed",
00:22:07.548 "digest": "sha512",
00:22:07.548 "dhgroup": "ffdhe4096"
00:22:07.548 }
00:22:07.548 }
00:22:07.548 ]'
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:07.548 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:08.113 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=:
00:22:08.113 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=:
00:22:08.678 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:08.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:08.678 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:08.678 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.678 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.678 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.678 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:08.678 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:08.678 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:09.247 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:09.505
00:22:09.505 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:09.505 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:09.505 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:09.763 {
00:22:09.763 "cntlid": 123,
00:22:09.763 "qid": 0,
00:22:09.763 "state": "enabled",
00:22:09.763 "thread": "nvmf_tgt_poll_group_000",
00:22:09.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:09.763 "listen_address": {
00:22:09.763 "trtype": "TCP",
00:22:09.763 "adrfam": "IPv4",
00:22:09.763 "traddr": "10.0.0.2",
00:22:09.763 "trsvcid": "4420"
00:22:09.763 },
00:22:09.763 "peer_address": {
00:22:09.763 "trtype": "TCP",
00:22:09.763 "adrfam": "IPv4",
00:22:09.763 "traddr": "10.0.0.1",
00:22:09.763 "trsvcid": "53528"
00:22:09.763 },
00:22:09.763 "auth": {
00:22:09.763 "state": "completed",
00:22:09.763 "digest": "sha512",
00:22:09.763 "dhgroup": "ffdhe4096"
00:22:09.763 }
00:22:09.763 }
00:22:09.763 ]'
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:09.763 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:10.022 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:10.022 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:10.022 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:10.022 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:10.022 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:10.280 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==:
00:22:10.280 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==:
00:22:11.282 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:11.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:11.282 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:11.282 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.282 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.282 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.282 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:11.282 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:11.282 07:56:04
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.540 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.798 00:22:11.798 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.798 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.798 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.056 { 00:22:12.056 "cntlid": 125, 00:22:12.056 "qid": 0, 00:22:12.056 "state": "enabled", 00:22:12.056 "thread": "nvmf_tgt_poll_group_000", 00:22:12.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.056 "listen_address": { 00:22:12.056 "trtype": "TCP", 00:22:12.056 "adrfam": "IPv4", 00:22:12.056 "traddr": "10.0.0.2", 00:22:12.056 
"trsvcid": "4420" 00:22:12.056 }, 00:22:12.056 "peer_address": { 00:22:12.056 "trtype": "TCP", 00:22:12.056 "adrfam": "IPv4", 00:22:12.056 "traddr": "10.0.0.1", 00:22:12.056 "trsvcid": "53556" 00:22:12.056 }, 00:22:12.056 "auth": { 00:22:12.056 "state": "completed", 00:22:12.056 "digest": "sha512", 00:22:12.056 "dhgroup": "ffdhe4096" 00:22:12.056 } 00:22:12.056 } 00:22:12.056 ]' 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.056 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.314 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:12.314 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.314 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.314 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.314 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.572 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:22:12.572 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:22:13.509 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.509 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.509 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.509 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.509 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.509 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.509 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.509 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.767 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:13.767 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.767 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:13.767 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:13.767 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:13.767 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.768 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:13.768 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.768 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.768 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.768 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:13.768 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.768 07:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.025 00:22:14.025 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.025 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.026 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.285 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.285 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.285 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.285 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.285 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.285 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.285 { 00:22:14.285 "cntlid": 127, 00:22:14.285 "qid": 0, 00:22:14.285 "state": "enabled", 00:22:14.285 "thread": "nvmf_tgt_poll_group_000", 00:22:14.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.285 "listen_address": { 00:22:14.285 "trtype": "TCP", 00:22:14.285 "adrfam": "IPv4", 00:22:14.285 "traddr": "10.0.0.2", 00:22:14.285 "trsvcid": "4420" 00:22:14.285 }, 00:22:14.285 "peer_address": { 00:22:14.285 "trtype": "TCP", 00:22:14.285 "adrfam": "IPv4", 00:22:14.285 "traddr": "10.0.0.1", 00:22:14.285 "trsvcid": "53592" 00:22:14.285 }, 00:22:14.285 "auth": { 00:22:14.285 "state": "completed", 00:22:14.285 "digest": "sha512", 00:22:14.285 "dhgroup": "ffdhe4096" 00:22:14.285 } 00:22:14.285 } 00:22:14.285 ]' 00:22:14.285 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.542 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.542 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.542 07:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:14.542 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.542 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.542 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.542 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.800 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:14.800 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.736 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.995 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.562 00:22:16.562 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.562 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.562 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.820 07:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.820 { 00:22:16.820 "cntlid": 129, 00:22:16.820 "qid": 0, 00:22:16.820 "state": "enabled", 00:22:16.820 "thread": "nvmf_tgt_poll_group_000", 00:22:16.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.820 "listen_address": { 00:22:16.820 "trtype": "TCP", 00:22:16.820 "adrfam": "IPv4", 00:22:16.820 "traddr": "10.0.0.2", 00:22:16.820 "trsvcid": "4420" 00:22:16.820 }, 00:22:16.820 "peer_address": { 00:22:16.820 "trtype": "TCP", 00:22:16.820 "adrfam": "IPv4", 00:22:16.820 "traddr": "10.0.0.1", 00:22:16.820 "trsvcid": "53618" 00:22:16.820 }, 00:22:16.820 "auth": { 00:22:16.820 "state": "completed", 00:22:16.820 "digest": "sha512", 00:22:16.820 "dhgroup": "ffdhe6144" 00:22:16.820 } 00:22:16.820 } 00:22:16.820 ]' 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:16.820 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.078 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.078 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.078 07:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.338 07:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:22:17.338 07:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:22:18.273 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.273 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.273 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.273 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.273 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.273 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.273 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:18.273 07:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.531 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.100 00:22:19.100 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.100 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.100 07:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.358 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.358 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.358 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.358 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.358 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.358 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.358 { 00:22:19.358 "cntlid": 131, 00:22:19.358 "qid": 0, 00:22:19.358 "state": "enabled", 00:22:19.358 "thread": "nvmf_tgt_poll_group_000", 00:22:19.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.358 "listen_address": { 00:22:19.358 "trtype": "TCP", 00:22:19.358 "adrfam": "IPv4", 00:22:19.358 "traddr": "10.0.0.2", 00:22:19.358 
"trsvcid": "4420" 00:22:19.358 }, 00:22:19.358 "peer_address": { 00:22:19.358 "trtype": "TCP", 00:22:19.358 "adrfam": "IPv4", 00:22:19.358 "traddr": "10.0.0.1", 00:22:19.358 "trsvcid": "42390" 00:22:19.358 }, 00:22:19.358 "auth": { 00:22:19.358 "state": "completed", 00:22:19.358 "digest": "sha512", 00:22:19.358 "dhgroup": "ffdhe6144" 00:22:19.358 } 00:22:19.358 } 00:22:19.358 ]' 00:22:19.359 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.359 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.359 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.359 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:19.359 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.359 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.359 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.359 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.617 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:22:19.617 07:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:22:20.553 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.553 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.553 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.553 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.553 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.553 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.553 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.553 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.811 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.380 00:22:21.380 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.380 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:21.380 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.638 { 00:22:21.638 "cntlid": 133, 00:22:21.638 "qid": 0, 00:22:21.638 "state": "enabled", 00:22:21.638 "thread": "nvmf_tgt_poll_group_000", 00:22:21.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.638 "listen_address": { 00:22:21.638 "trtype": "TCP", 00:22:21.638 "adrfam": "IPv4", 00:22:21.638 "traddr": "10.0.0.2", 00:22:21.638 "trsvcid": "4420" 00:22:21.638 }, 00:22:21.638 "peer_address": { 00:22:21.638 "trtype": "TCP", 00:22:21.638 "adrfam": "IPv4", 00:22:21.638 "traddr": "10.0.0.1", 00:22:21.638 "trsvcid": "42428" 00:22:21.638 }, 00:22:21.638 "auth": { 00:22:21.638 "state": "completed", 00:22:21.638 "digest": "sha512", 00:22:21.638 "dhgroup": "ffdhe6144" 00:22:21.638 } 00:22:21.638 } 00:22:21.638 ]' 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.638 07:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.638 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.897 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:22:21.897 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:22:22.832 07:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.832 07:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.832 07:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.833 07:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.091 07:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.091 07:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.091 07:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.091 07:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.349 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.918 00:22:23.918 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.918 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.918 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.177 { 00:22:24.177 "cntlid": 135, 00:22:24.177 "qid": 0, 00:22:24.177 "state": "enabled", 00:22:24.177 "thread": "nvmf_tgt_poll_group_000", 00:22:24.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.177 "listen_address": { 00:22:24.177 "trtype": "TCP", 00:22:24.177 "adrfam": "IPv4", 00:22:24.177 "traddr": "10.0.0.2", 00:22:24.177 "trsvcid": "4420" 00:22:24.177 }, 00:22:24.177 "peer_address": { 00:22:24.177 "trtype": "TCP", 00:22:24.177 "adrfam": "IPv4", 00:22:24.177 "traddr": "10.0.0.1", 00:22:24.177 "trsvcid": "42458" 00:22:24.177 }, 00:22:24.177 "auth": { 00:22:24.177 "state": "completed", 00:22:24.177 "digest": "sha512", 00:22:24.177 "dhgroup": "ffdhe6144" 00:22:24.177 } 00:22:24.177 } 00:22:24.177 ]' 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.177 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.436 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:24.436 07:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:25.370 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.370 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.370 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.370 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.370 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.370 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.370 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.370 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.370 07:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.628 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.563 00:22:26.563 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.563 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.563 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.821 { 00:22:26.821 "cntlid": 137, 00:22:26.821 "qid": 0, 00:22:26.821 "state": "enabled", 00:22:26.821 "thread": "nvmf_tgt_poll_group_000", 00:22:26.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.821 "listen_address": { 00:22:26.821 "trtype": "TCP", 00:22:26.821 "adrfam": "IPv4", 00:22:26.821 "traddr": "10.0.0.2", 00:22:26.821 
"trsvcid": "4420" 00:22:26.821 }, 00:22:26.821 "peer_address": { 00:22:26.821 "trtype": "TCP", 00:22:26.821 "adrfam": "IPv4", 00:22:26.821 "traddr": "10.0.0.1", 00:22:26.821 "trsvcid": "42480" 00:22:26.821 }, 00:22:26.821 "auth": { 00:22:26.821 "state": "completed", 00:22:26.821 "digest": "sha512", 00:22:26.821 "dhgroup": "ffdhe8192" 00:22:26.821 } 00:22:26.821 } 00:22:26.821 ]' 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.821 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.080 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.080 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.080 07:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.338 07:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:22:27.338 07:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:22:28.274 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.274 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.274 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.274 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.274 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.274 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.274 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.274 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.532 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:28.532 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.532 07:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.532 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.533 07:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.470 00:22:29.470 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.470 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.470 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.728 { 00:22:29.728 "cntlid": 139, 00:22:29.728 "qid": 0, 00:22:29.728 "state": "enabled", 00:22:29.728 "thread": "nvmf_tgt_poll_group_000", 00:22:29.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.728 "listen_address": { 00:22:29.728 "trtype": "TCP", 00:22:29.728 "adrfam": "IPv4", 00:22:29.728 "traddr": "10.0.0.2", 00:22:29.728 "trsvcid": "4420" 00:22:29.728 }, 00:22:29.728 "peer_address": { 00:22:29.728 "trtype": "TCP", 00:22:29.728 "adrfam": "IPv4", 00:22:29.728 "traddr": "10.0.0.1", 00:22:29.728 "trsvcid": "44530" 00:22:29.728 }, 00:22:29.728 "auth": { 00:22:29.728 "state": "completed", 00:22:29.728 "digest": "sha512", 00:22:29.728 "dhgroup": "ffdhe8192" 00:22:29.728 } 00:22:29.728 } 00:22:29.728 ]' 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.728 07:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.728 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.986 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:22:29.987 07:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: --dhchap-ctrl-secret DHHC-1:02:NjJlMzA4NjE2Y2E2MjE2NDJkMWE2MDVkNTQzMTAyZmQwZmRhZGE4ZDhlN2YwNTQ4JND7rg==: 00:22:30.925 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.925 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.925 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.925 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.925 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.925 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.925 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.925 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.182 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:31.182 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.182 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.182 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:31.182 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:31.182 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.182 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
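The log above repeats one cycle per (digest, dhgroup, key) combination: restrict the host's DH-HMAC-CHAP options, register the host NQN on the subsystem with its key pair, then attach a controller to drive the handshake. The following is a minimal sketch of that cycle with the SPDK `rpc.py` invocation stubbed out by a hypothetical `rpc` function (it only echoes the command line, so no live target is needed); the NQNs, address, and key IDs are taken from the log.

```shell
#!/usr/bin/env bash
# Stub: echo the rpc.py command line instead of calling a live SPDK target.
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

digest=sha512
dhgroup=ffdhe8192
keyid=2
hostnqn="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
subnqn="nqn.2024-03.io.spdk:cnode0"

# 1. Restrict the host to a single digest/dhgroup pair for this cycle.
cmd1=$(rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup")

# 2. Allow the host NQN on the subsystem with its key (and controller key, when set).
cmd2=$(rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid")

# 3. Attach a bdev controller; this performs the DH-HMAC-CHAP handshake.
cmd3=$(rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid")

echo "$cmd1"
echo "$cmd2"
echo "$cmd3"
```

After the attach, the test verifies success by reading `nvmf_subsystem_get_qpairs` and checking `.auth.state == "completed"` with the expected digest and dhgroup, then detaches and moves on to the next key ID.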
00:22:31.182 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.183 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.183 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.183 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.183 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.183 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.118 00:22:32.118 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.118 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.118 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.376 07:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.376 { 00:22:32.376 "cntlid": 141, 00:22:32.376 "qid": 0, 00:22:32.376 "state": "enabled", 00:22:32.376 "thread": "nvmf_tgt_poll_group_000", 00:22:32.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.376 "listen_address": { 00:22:32.376 "trtype": "TCP", 00:22:32.376 "adrfam": "IPv4", 00:22:32.376 "traddr": "10.0.0.2", 00:22:32.376 "trsvcid": "4420" 00:22:32.376 }, 00:22:32.376 "peer_address": { 00:22:32.376 "trtype": "TCP", 00:22:32.376 "adrfam": "IPv4", 00:22:32.376 "traddr": "10.0.0.1", 00:22:32.376 "trsvcid": "44554" 00:22:32.376 }, 00:22:32.376 "auth": { 00:22:32.376 "state": "completed", 00:22:32.376 "digest": "sha512", 00:22:32.376 "dhgroup": "ffdhe8192" 00:22:32.376 } 00:22:32.376 } 00:22:32.376 ]' 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.376 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.635 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:22:32.635 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:01:ZTMxOTQzNTcyMzcxZWU0Y2VhZjJiZDk3OTc1MjhiZWSgCByO: 00:22:33.570 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.570 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.570 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.570 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.570 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.570 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.570 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.570 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.828 07:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.765 00:22:34.765 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.765 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.765 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.025 { 00:22:35.025 "cntlid": 143, 00:22:35.025 "qid": 0, 00:22:35.025 "state": "enabled", 00:22:35.025 "thread": "nvmf_tgt_poll_group_000", 00:22:35.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.025 "listen_address": { 00:22:35.025 "trtype": "TCP", 00:22:35.025 "adrfam": 
"IPv4", 00:22:35.025 "traddr": "10.0.0.2", 00:22:35.025 "trsvcid": "4420" 00:22:35.025 }, 00:22:35.025 "peer_address": { 00:22:35.025 "trtype": "TCP", 00:22:35.025 "adrfam": "IPv4", 00:22:35.025 "traddr": "10.0.0.1", 00:22:35.025 "trsvcid": "44580" 00:22:35.025 }, 00:22:35.025 "auth": { 00:22:35.025 "state": "completed", 00:22:35.025 "digest": "sha512", 00:22:35.025 "dhgroup": "ffdhe8192" 00:22:35.025 } 00:22:35.025 } 00:22:35.025 ]' 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.025 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.025 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.025 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.025 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.025 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.025 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.282 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:35.282 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:36.247 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.247 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.247 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.247 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.247 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.247 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:36.247 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:36.247 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:36.248 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:36.248 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:36.248 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:36.531 07:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.531 07:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.472 00:22:37.472 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.472 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.472 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.731 { 00:22:37.731 "cntlid": 145, 00:22:37.731 "qid": 0, 00:22:37.731 "state": "enabled", 00:22:37.731 "thread": "nvmf_tgt_poll_group_000", 00:22:37.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.731 "listen_address": { 00:22:37.731 "trtype": "TCP", 00:22:37.731 "adrfam": "IPv4", 00:22:37.731 "traddr": "10.0.0.2", 00:22:37.731 "trsvcid": "4420" 00:22:37.731 }, 00:22:37.731 "peer_address": { 00:22:37.731 "trtype": "TCP", 00:22:37.731 "adrfam": "IPv4", 00:22:37.731 "traddr": "10.0.0.1", 00:22:37.731 "trsvcid": "60182" 00:22:37.731 }, 00:22:37.731 "auth": { 00:22:37.731 "state": 
"completed", 00:22:37.731 "digest": "sha512", 00:22:37.731 "dhgroup": "ffdhe8192" 00:22:37.731 } 00:22:37.731 } 00:22:37.731 ]' 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.731 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.991 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:22:37.991 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjE0Y2UxODg1ZDE3Yjc0M2M1MzQ5ZjBkNmIxODEyMTljYzczZmFjNjg5Y2EwNGQ4EzAdmg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTZhYzA5NDZiOWQyMTZiZDE2MzhiYzJmNWUwOWQ5N2JhN2ExYWYxODdhYThmMjVjZmFiYTY2Yzc2ZTBhYWFiYbt1USg=: 00:22:38.930 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:39.189 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:40.129 request: 00:22:40.129 { 00:22:40.129 "name": "nvme0", 00:22:40.129 "trtype": "tcp", 00:22:40.129 "traddr": "10.0.0.2", 00:22:40.129 "adrfam": "ipv4", 00:22:40.129 "trsvcid": "4420", 00:22:40.129 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:40.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:40.129 "prchk_reftag": false, 00:22:40.129 "prchk_guard": false, 00:22:40.129 "hdgst": false, 00:22:40.129 "ddgst": false, 00:22:40.129 "dhchap_key": "key2", 00:22:40.129 "allow_unrecognized_csi": false, 00:22:40.129 "method": "bdev_nvme_attach_controller", 00:22:40.129 "req_id": 1 00:22:40.129 } 00:22:40.129 Got JSON-RPC error response 00:22:40.129 response: 00:22:40.129 { 00:22:40.129 "code": -5, 00:22:40.129 "message": 
"Input/output error" 00:22:40.129 } 00:22:40.129 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:40.129 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.129 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.129 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.129 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:40.130 07:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.130 07:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.698 request: 00:22:40.698 { 00:22:40.698 "name": "nvme0", 00:22:40.698 "trtype": "tcp", 00:22:40.698 "traddr": "10.0.0.2", 00:22:40.698 "adrfam": "ipv4", 00:22:40.698 "trsvcid": "4420", 00:22:40.698 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:40.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:40.698 "prchk_reftag": false, 00:22:40.698 "prchk_guard": false, 00:22:40.698 "hdgst": 
false, 00:22:40.698 "ddgst": false, 00:22:40.698 "dhchap_key": "key1", 00:22:40.698 "dhchap_ctrlr_key": "ckey2", 00:22:40.698 "allow_unrecognized_csi": false, 00:22:40.698 "method": "bdev_nvme_attach_controller", 00:22:40.698 "req_id": 1 00:22:40.698 } 00:22:40.698 Got JSON-RPC error response 00:22:40.698 response: 00:22:40.698 { 00:22:40.698 "code": -5, 00:22:40.698 "message": "Input/output error" 00:22:40.698 } 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.698 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.699 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.636 request: 00:22:41.636 { 00:22:41.636 "name": "nvme0", 00:22:41.636 "trtype": 
"tcp", 00:22:41.636 "traddr": "10.0.0.2", 00:22:41.636 "adrfam": "ipv4", 00:22:41.636 "trsvcid": "4420", 00:22:41.636 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:41.636 "prchk_reftag": false, 00:22:41.636 "prchk_guard": false, 00:22:41.636 "hdgst": false, 00:22:41.636 "ddgst": false, 00:22:41.636 "dhchap_key": "key1", 00:22:41.636 "dhchap_ctrlr_key": "ckey1", 00:22:41.636 "allow_unrecognized_csi": false, 00:22:41.636 "method": "bdev_nvme_attach_controller", 00:22:41.636 "req_id": 1 00:22:41.636 } 00:22:41.636 Got JSON-RPC error response 00:22:41.636 response: 00:22:41.636 { 00:22:41.636 "code": -5, 00:22:41.636 "message": "Input/output error" 00:22:41.636 } 00:22:41.636 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:41.636 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.636 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.636 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.636 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.636 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 733794 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 733794 ']' 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 733794 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 733794 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 733794' 00:22:41.637 killing process with pid 733794 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 733794 00:22:41.637 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 733794 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=756427 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 756427 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 756427 ']' 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.895 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 756427 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 756427 ']' 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.154 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.413 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.413 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:42.413 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:42.413 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.413 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.672 null0 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FOL 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.jKN ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jKN 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Kvz 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.3io ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3io 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uc7 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Nnm ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nnm 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.672 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ok6 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.673 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.051 nvme0n1 00:22:44.051 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.051 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.051 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.309 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.309 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.309 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.309 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.309 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.309 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.309 { 00:22:44.309 "cntlid": 1, 00:22:44.309 "qid": 0, 00:22:44.309 "state": "enabled", 00:22:44.309 "thread": "nvmf_tgt_poll_group_000", 00:22:44.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:44.309 "listen_address": { 00:22:44.309 "trtype": "TCP", 00:22:44.309 "adrfam": "IPv4", 00:22:44.309 "traddr": "10.0.0.2", 00:22:44.309 "trsvcid": "4420" 00:22:44.309 }, 00:22:44.309 "peer_address": { 00:22:44.309 "trtype": "TCP", 00:22:44.309 "adrfam": "IPv4", 00:22:44.309 "traddr": 
"10.0.0.1", 00:22:44.309 "trsvcid": "60220" 00:22:44.309 }, 00:22:44.309 "auth": { 00:22:44.309 "state": "completed", 00:22:44.309 "digest": "sha512", 00:22:44.309 "dhgroup": "ffdhe8192" 00:22:44.309 } 00:22:44.309 } 00:22:44.309 ]' 00:22:44.309 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.567 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.567 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.567 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:44.567 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.567 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.567 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.567 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.826 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:44.826 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:45.762 07:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:45.762 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:46.020 07:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.020 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.280 request: 00:22:46.280 { 00:22:46.280 "name": "nvme0", 00:22:46.280 "trtype": "tcp", 00:22:46.280 "traddr": "10.0.0.2", 00:22:46.280 "adrfam": "ipv4", 00:22:46.280 "trsvcid": "4420", 00:22:46.280 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:46.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:46.280 "prchk_reftag": false, 00:22:46.280 "prchk_guard": false, 00:22:46.280 "hdgst": false, 00:22:46.280 "ddgst": false, 00:22:46.280 "dhchap_key": "key3", 00:22:46.280 
"allow_unrecognized_csi": false, 00:22:46.280 "method": "bdev_nvme_attach_controller", 00:22:46.280 "req_id": 1 00:22:46.280 } 00:22:46.280 Got JSON-RPC error response 00:22:46.280 response: 00:22:46.280 { 00:22:46.280 "code": -5, 00:22:46.280 "message": "Input/output error" 00:22:46.280 } 00:22:46.280 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:46.280 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.280 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.280 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.280 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:46.280 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:46.280 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:46.280 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:46.537 07:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.537 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.795 request: 00:22:46.795 { 00:22:46.795 "name": "nvme0", 00:22:46.795 "trtype": "tcp", 00:22:46.795 "traddr": "10.0.0.2", 00:22:46.795 "adrfam": "ipv4", 00:22:46.795 "trsvcid": "4420", 00:22:46.795 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:46.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:46.795 "prchk_reftag": false, 00:22:46.795 "prchk_guard": false, 00:22:46.795 "hdgst": false, 00:22:46.795 "ddgst": false, 00:22:46.795 "dhchap_key": "key3", 00:22:46.795 "allow_unrecognized_csi": false, 00:22:46.795 "method": "bdev_nvme_attach_controller", 00:22:46.795 "req_id": 1 00:22:46.795 } 00:22:46.795 Got JSON-RPC error response 00:22:46.795 response: 00:22:46.795 { 00:22:46.795 "code": -5, 00:22:46.795 "message": "Input/output error" 00:22:46.795 } 00:22:46.795 
07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.795 07:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.054 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.623 request: 00:22:47.623 { 00:22:47.623 "name": "nvme0", 00:22:47.623 "trtype": "tcp", 00:22:47.623 "traddr": "10.0.0.2", 00:22:47.623 "adrfam": "ipv4", 00:22:47.623 "trsvcid": "4420", 00:22:47.623 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:47.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:47.623 "prchk_reftag": false, 00:22:47.623 "prchk_guard": false, 00:22:47.623 "hdgst": false, 00:22:47.623 "ddgst": false, 00:22:47.623 "dhchap_key": "key0", 00:22:47.623 "dhchap_ctrlr_key": "key1", 00:22:47.623 "allow_unrecognized_csi": false, 00:22:47.623 "method": "bdev_nvme_attach_controller", 00:22:47.623 "req_id": 1 00:22:47.623 } 00:22:47.623 Got JSON-RPC error response 00:22:47.623 response: 00:22:47.623 { 00:22:47.623 "code": -5, 00:22:47.623 "message": "Input/output error" 00:22:47.623 } 00:22:47.623 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:47.623 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.623 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.623 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.623 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:47.623 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:47.623 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:48.189 nvme0n1 00:22:48.189 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:48.189 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:48.189 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.447 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.447 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.447 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.706 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:48.706 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.706 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:48.706 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.706 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:48.706 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:48.706 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:50.087 nvme0n1 00:22:50.087 07:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:50.087 07:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:50.087 07:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.087 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.087 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:50.087 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.087 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.346 
07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.346 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:50.346 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:50.346 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.604 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.604 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:50.604 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: --dhchap-ctrl-secret DHHC-1:03:YTlhZWMxNjg4M2FhZWFlYmY1YWY3MTAwZTdlODRlN2YzMzE2MGE1MTZhYmRhMjVmOTVjMWU2OGYyY2JiYTU1MisCsxc=: 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.540 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:51.799 07:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:52.368 request: 00:22:52.368 { 00:22:52.368 "name": "nvme0", 00:22:52.368 "trtype": "tcp", 00:22:52.368 "traddr": "10.0.0.2", 00:22:52.368 "adrfam": "ipv4", 00:22:52.368 "trsvcid": "4420", 00:22:52.368 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:52.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:52.368 "prchk_reftag": false, 00:22:52.368 "prchk_guard": false, 00:22:52.368 "hdgst": false, 00:22:52.368 "ddgst": false, 00:22:52.368 "dhchap_key": "key1", 00:22:52.368 "allow_unrecognized_csi": false, 00:22:52.368 "method": "bdev_nvme_attach_controller", 00:22:52.368 "req_id": 1 00:22:52.368 } 00:22:52.368 Got JSON-RPC error response 00:22:52.368 response: 00:22:52.368 { 00:22:52.368 "code": -5, 00:22:52.368 "message": "Input/output error" 00:22:52.368 } 00:22:52.627 07:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:52.627 07:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.627 07:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.627 07:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.627 07:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.627 07:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.627 07:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:54.007 nvme0n1 00:22:54.007 07:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:54.007 07:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:54.007 07:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.266 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.266 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.266 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.524 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.524 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.524 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.524 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.524 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:54.524 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:54.524 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:54.782 nvme0n1 00:22:54.782 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:54.782 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:54.782 07:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.041 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.041 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.041 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: '' 2s 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: ]] 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:M2U1OWVjMmZjZjdlYjU0NDgxZmYzNzBhZTY0M2MzMGZcvYyE: 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:55.609 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:57.516 
07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: 2s 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:57.516 07:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: ]] 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MmIwODM3YzI0ZjRlZmEzN2VjNWZkMjg0MzU0YmVmYWVmZjQwMWFmOWFhOThkZjAxoI9+iQ==: 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:57.516 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:59.426 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:59.426 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.685 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:01.066 nvme0n1 00:23:01.066 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.066 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.066 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.066 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.066 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.066 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:02.003 07:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:02.003 07:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:02.004 07:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.004 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.004 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.004 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.004 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.004 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.004 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:02.004 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:02.262 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:02.262 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.262 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:02.829 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:03.424 request: 00:23:03.424 { 00:23:03.424 "name": "nvme0", 00:23:03.424 "dhchap_key": "key1", 00:23:03.424 "dhchap_ctrlr_key": "key3", 00:23:03.424 "method": "bdev_nvme_set_keys", 00:23:03.424 "req_id": 1 00:23:03.424 } 00:23:03.424 Got JSON-RPC error response 00:23:03.424 response: 00:23:03.424 { 00:23:03.424 "code": -13, 00:23:03.424 "message": "Permission denied" 00:23:03.424 } 00:23:03.424 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:03.424 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:03.424 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:03.424 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:03.424 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:03.424 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:03.424 07:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.686 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:23:03.686 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:05.064 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:06.440 nvme0n1 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.440 07:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:06.440 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:07.377 request: 00:23:07.377 { 00:23:07.377 "name": "nvme0", 00:23:07.377 "dhchap_key": "key2", 00:23:07.377 "dhchap_ctrlr_key": "key0", 00:23:07.377 "method": "bdev_nvme_set_keys", 00:23:07.377 "req_id": 1 00:23:07.377 } 00:23:07.377 Got JSON-RPC error response 00:23:07.377 response: 00:23:07.377 { 00:23:07.377 "code": -13, 00:23:07.377 "message": "Permission denied" 00:23:07.377 } 00:23:07.377 07:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:07.377 07:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.377 07:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.377 07:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.377 07:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:07.377 07:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.377 07:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:07.635 07:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:07.635 07:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:08.570 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:08.570 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:08.570 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 733820 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 733820 ']' 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 733820 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 733820 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 733820' 00:23:08.828 killing process with pid 733820 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 733820 00:23:08.828 07:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 733820 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.398 rmmod nvme_tcp 00:23:09.398 rmmod nvme_fabrics 00:23:09.398 rmmod nvme_keyring 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 756427 ']' 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 756427 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 756427 ']' 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 756427 00:23:09.398 07:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756427 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756427' 00:23:09.398 killing process with pid 756427 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 756427 00:23:09.398 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 756427 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.659 07:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.560 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.560 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FOL /tmp/spdk.key-sha256.Kvz /tmp/spdk.key-sha384.uc7 /tmp/spdk.key-sha512.ok6 /tmp/spdk.key-sha512.jKN /tmp/spdk.key-sha384.3io /tmp/spdk.key-sha256.Nnm '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:11.560 00:23:11.561 real 3m30.995s 00:23:11.561 user 8m15.772s 00:23:11.561 sys 0m27.907s 00:23:11.561 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.561 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.561 ************************************ 00:23:11.561 END TEST nvmf_auth_target 00:23:11.561 ************************************ 00:23:11.561 07:57:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:11.561 07:57:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:11.561 07:57:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:11.561 07:57:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.561 07:57:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:11.820 ************************************ 00:23:11.820 START TEST nvmf_bdevio_no_huge 00:23:11.820 ************************************ 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:11.820 * Looking for test storage... 00:23:11.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.820 07:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.820 --rc genhtml_branch_coverage=1 00:23:11.820 --rc genhtml_function_coverage=1 00:23:11.820 --rc genhtml_legend=1 00:23:11.820 --rc geninfo_all_blocks=1 00:23:11.820 --rc geninfo_unexecuted_blocks=1 00:23:11.820 00:23:11.820 ' 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.820 --rc genhtml_branch_coverage=1 00:23:11.820 --rc genhtml_function_coverage=1 00:23:11.820 --rc genhtml_legend=1 00:23:11.820 --rc geninfo_all_blocks=1 00:23:11.820 --rc geninfo_unexecuted_blocks=1 00:23:11.820 00:23:11.820 ' 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.820 --rc genhtml_branch_coverage=1 00:23:11.820 --rc genhtml_function_coverage=1 00:23:11.820 --rc genhtml_legend=1 00:23:11.820 --rc geninfo_all_blocks=1 00:23:11.820 --rc geninfo_unexecuted_blocks=1 00:23:11.820 00:23:11.820 ' 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.820 --rc genhtml_branch_coverage=1 00:23:11.820 --rc 
genhtml_function_coverage=1 00:23:11.820 --rc genhtml_legend=1 00:23:11.820 --rc geninfo_all_blocks=1 00:23:11.820 --rc geninfo_unexecuted_blocks=1 00:23:11.820 00:23:11.820 ' 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.820 07:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.820 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.821 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.351 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:23:14.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:14.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:14.352 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.352 
07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:14.352 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.352 07:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.352 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.352 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.352 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.352 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.352 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.352 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.352 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.352 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:23:14.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:23:14.352 00:23:14.352 --- 10.0.0.2 ping statistics --- 00:23:14.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.352 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:23:14.353 00:23:14.353 --- 10.0.0.1 ping statistics --- 00:23:14.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.353 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=761868 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 761868 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 761868 ']' 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.353 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.353 [2024-11-18 07:57:07.230024] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:14.353 [2024-11-18 07:57:07.230120] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:14.353 [2024-11-18 07:57:07.312434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.353 [2024-11-18 07:57:07.358307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.353 [2024-11-18 07:57:07.358365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.353 [2024-11-18 07:57:07.358379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.353 [2024-11-18 07:57:07.358391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.353 [2024-11-18 07:57:07.358401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.353 [2024-11-18 07:57:07.359369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:14.353 [2024-11-18 07:57:07.359432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:14.353 [2024-11-18 07:57:07.359507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:14.353 [2024-11-18 07:57:07.359511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.611 [2024-11-18 07:57:07.504884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:14.611 07:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.611 Malloc0 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.611 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.612 [2024-11-18 07:57:07.543117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.612 07:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.612 { 00:23:14.612 "params": { 00:23:14.612 "name": "Nvme$subsystem", 00:23:14.612 "trtype": "$TEST_TRANSPORT", 00:23:14.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.612 "adrfam": "ipv4", 00:23:14.612 "trsvcid": "$NVMF_PORT", 00:23:14.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.612 "hdgst": ${hdgst:-false}, 00:23:14.612 "ddgst": ${ddgst:-false} 00:23:14.612 }, 00:23:14.612 "method": "bdev_nvme_attach_controller" 00:23:14.612 } 00:23:14.612 EOF 00:23:14.612 )") 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:14.612 07:57:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:14.612 "params": { 00:23:14.612 "name": "Nvme1", 00:23:14.612 "trtype": "tcp", 00:23:14.612 "traddr": "10.0.0.2", 00:23:14.612 "adrfam": "ipv4", 00:23:14.612 "trsvcid": "4420", 00:23:14.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.612 "hdgst": false, 00:23:14.612 "ddgst": false 00:23:14.612 }, 00:23:14.612 "method": "bdev_nvme_attach_controller" 00:23:14.612 }' 00:23:14.612 [2024-11-18 07:57:07.591958] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:14.612 [2024-11-18 07:57:07.592038] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid762071 ] 00:23:14.612 [2024-11-18 07:57:07.667822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:14.870 [2024-11-18 07:57:07.718439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.870 [2024-11-18 07:57:07.718508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.870 [2024-11-18 07:57:07.718512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.870 I/O targets: 00:23:14.870 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:14.870 00:23:14.870 00:23:14.870 CUnit - A unit testing framework for C - Version 2.1-3 00:23:14.870 http://cunit.sourceforge.net/ 00:23:14.870 00:23:14.870 00:23:14.870 Suite: bdevio tests on: Nvme1n1 00:23:14.870 Test: blockdev write read block ...passed 00:23:15.130 Test: blockdev write zeroes read block ...passed 00:23:15.130 Test: blockdev write zeroes read no split ...passed 00:23:15.130 Test: blockdev write zeroes 
read split ...passed 00:23:15.130 Test: blockdev write zeroes read split partial ...passed 00:23:15.130 Test: blockdev reset ...[2024-11-18 07:57:08.066743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:15.130 [2024-11-18 07:57:08.066865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12774b0 (9): Bad file descriptor 00:23:15.130 [2024-11-18 07:57:08.087065] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:23:15.130 passed 00:23:15.130 Test: blockdev write read 8 blocks ...passed 00:23:15.130 Test: blockdev write read size > 128k ...passed 00:23:15.130 Test: blockdev write read invalid size ...passed 00:23:15.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:15.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:15.130 Test: blockdev write read max offset ...passed 00:23:15.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:15.130 Test: blockdev writev readv 8 blocks ...passed 00:23:15.130 Test: blockdev writev readv 30 x 1block ...passed 00:23:15.389 Test: blockdev writev readv block ...passed 00:23:15.389 Test: blockdev writev readv size > 128k ...passed 00:23:15.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:15.389 Test: blockdev comparev and writev ...[2024-11-18 07:57:08.261880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.389 [2024-11-18 07:57:08.261917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.261943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.389 [2024-11-18 
07:57:08.261962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.262269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.389 [2024-11-18 07:57:08.262295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.262329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.389 [2024-11-18 07:57:08.262348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.262686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.389 [2024-11-18 07:57:08.262712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.262735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.389 [2024-11-18 07:57:08.262752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.263060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.389 [2024-11-18 07:57:08.263085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.263108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.389 [2024-11-18 07:57:08.263125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.389 passed 00:23:15.389 Test: blockdev nvme passthru rw ...passed 00:23:15.389 Test: blockdev nvme passthru vendor specific ...[2024-11-18 07:57:08.345732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.389 [2024-11-18 07:57:08.345759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.345912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.389 [2024-11-18 07:57:08.345935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.346085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.389 [2024-11-18 07:57:08.346109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.389 [2024-11-18 07:57:08.346263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.389 [2024-11-18 07:57:08.346286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.389 passed 00:23:15.389 Test: blockdev nvme admin passthru ...passed 00:23:15.389 Test: blockdev copy ...passed 00:23:15.389 00:23:15.389 Run Summary: Type Total Ran Passed Failed Inactive 00:23:15.389 suites 1 1 n/a 0 0 00:23:15.389 tests 23 23 23 0 0 00:23:15.389 asserts 152 152 152 0 n/a 00:23:15.389 00:23:15.389 Elapsed time = 0.982 seconds 
00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.648 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.648 rmmod nvme_tcp 00:23:15.648 rmmod nvme_fabrics 00:23:15.907 rmmod nvme_keyring 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 761868 ']' 00:23:15.907 07:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 761868 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 761868 ']' 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 761868 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 761868 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 761868' 00:23:15.907 killing process with pid 761868 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 761868 00:23:15.907 07:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 761868 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:16.165 07:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.165 07:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.701 00:23:18.701 real 0m6.550s 00:23:18.701 user 0m9.708s 00:23:18.701 sys 0m2.629s 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.701 ************************************ 00:23:18.701 END TEST nvmf_bdevio_no_huge 00:23:18.701 ************************************ 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:18.701 
************************************ 00:23:18.701 START TEST nvmf_tls 00:23:18.701 ************************************ 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:18.701 * Looking for test storage... 00:23:18.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.701 --rc genhtml_branch_coverage=1 00:23:18.701 --rc genhtml_function_coverage=1 00:23:18.701 --rc genhtml_legend=1 00:23:18.701 --rc geninfo_all_blocks=1 00:23:18.701 --rc geninfo_unexecuted_blocks=1 00:23:18.701 00:23:18.701 ' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.701 --rc genhtml_branch_coverage=1 00:23:18.701 --rc genhtml_function_coverage=1 00:23:18.701 --rc genhtml_legend=1 00:23:18.701 --rc geninfo_all_blocks=1 00:23:18.701 --rc geninfo_unexecuted_blocks=1 00:23:18.701 00:23:18.701 ' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.701 --rc genhtml_branch_coverage=1 00:23:18.701 --rc genhtml_function_coverage=1 00:23:18.701 --rc genhtml_legend=1 00:23:18.701 --rc geninfo_all_blocks=1 00:23:18.701 --rc geninfo_unexecuted_blocks=1 00:23:18.701 00:23:18.701 ' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.701 --rc genhtml_branch_coverage=1 00:23:18.701 --rc genhtml_function_coverage=1 00:23:18.701 --rc genhtml_legend=1 00:23:18.701 --rc geninfo_all_blocks=1 00:23:18.701 --rc geninfo_unexecuted_blocks=1 00:23:18.701 00:23:18.701 ' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.701 
07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.701 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:18.702 07:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.604 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.605 07:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:20.605 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:20.605 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.605 07:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:20.605 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:20.605 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:20.605 07:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.605 
07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:23:20.605 00:23:20.605 --- 10.0.0.2 ping statistics --- 00:23:20.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.605 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:23:20.605 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:20.866 00:23:20.866 --- 10.0.0.1 ping statistics --- 00:23:20.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.866 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=764538 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 764538 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # 
'[' -z 764538 ']' 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.866 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.866 [2024-11-18 07:57:13.771205] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:20.866 [2024-11-18 07:57:13.771293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.866 [2024-11-18 07:57:13.847558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.866 [2024-11-18 07:57:13.894604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.866 [2024-11-18 07:57:13.894666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:20.866 [2024-11-18 07:57:13.894680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.866 [2024-11-18 07:57:13.894691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.866 [2024-11-18 07:57:13.894701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.866 [2024-11-18 07:57:13.895302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.125 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.125 07:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.125 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.125 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.125 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.125 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.125 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:21.125 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:21.383 true 00:23:21.383 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:21.383 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:21.640 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:21.640 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:21.640 
07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:21.898 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:21.898 07:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:22.157 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:22.157 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:22.157 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:22.416 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.416 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:22.674 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:22.674 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:22.674 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.674 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:22.932 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:22.932 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:22.932 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:23:23.190 07:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:23.190 07:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:23.448 07:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:23.448 07:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:23.448 07:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:24.015 07:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:24.015 07:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:24.015 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:24.274 07:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9gaA9XUlw5 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ULLX64rLaX 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9gaA9XUlw5 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
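The `format_interchange_psk` calls above turn a raw hex key plus a digest id into an `NVMeTLSkey-1:01:...:` string. A minimal Python sketch of what that helper appears to do, based on the trace and the NVMe/TCP TLS PSK interchange format (the configured key bytes followed by their CRC-32, base64-encoded between a version/hash prefix and a trailing colon): the function name mirrors the trace, and the little-endian CRC byte order is an assumption, not confirmed by the log.

```python
import base64
import zlib


def format_interchange_psk(key: str, hmac_id: int) -> str:
    """Sketch of the trace's format_interchange_psk helper (assumed behavior).

    Encodes the configured key text in the NVMe TLS PSK interchange format:
    NVMeTLSkey-1:<hash id>:base64(key bytes || CRC-32 of key bytes):
    """
    raw = key.encode("ascii")                    # key text exactly as configured
    crc = zlib.crc32(raw).to_bytes(4, "little")  # CRC-32 appended to the key (byte order assumed)
    payload = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hmac_id:02d}:{payload}:"
```

For the log's first input, `format_interchange_psk("00112233445566778899aabbccddeeff", 1)` produces a `NVMeTLSkey-1:01:...:` string of the same shape as the key written to `/tmp/tmp.9gaA9XUlw5`; the base64 payload decodes to 36 bytes, the 32-byte ASCII key plus a 4-byte checksum, matching the payload length seen in the trace.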
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ULLX64rLaX 00:23:24.275 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:24.535 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:24.794 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9gaA9XUlw5 00:23:24.794 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9gaA9XUlw5 00:23:24.794 07:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:25.052 [2024-11-18 07:57:18.061963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.052 07:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:25.618 07:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.876 [2024-11-18 07:57:18.715690] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.876 [2024-11-18 07:57:18.715948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.876 07:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:26.180 malloc0 00:23:26.180 07:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.461 07:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9gaA9XUlw5 00:23:26.738 07:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.998 07:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9gaA9XUlw5 00:23:36.986 Initializing NVMe Controllers 00:23:36.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:36.987 Initialization complete. Launching workers. 
00:23:36.987 ======================================================== 00:23:36.987 Latency(us) 00:23:36.987 Device Information : IOPS MiB/s Average min max 00:23:36.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8669.79 33.87 7383.96 1000.44 9125.63 00:23:36.987 ======================================================== 00:23:36.987 Total : 8669.79 33.87 7383.96 1000.44 9125.63 00:23:36.987 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9gaA9XUlw5 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9gaA9XUlw5 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=766439 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 766439 /var/tmp/bdevperf.sock 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 766439 ']' 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.987 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.987 [2024-11-18 07:57:30.059338] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:36.987 [2024-11-18 07:57:30.059435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766439 ] 00:23:37.245 [2024-11-18 07:57:30.130708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.245 [2024-11-18 07:57:30.179917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.245 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.245 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.245 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9gaA9XUlw5 00:23:37.815 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:37.815 [2024-11-18 07:57:30.852556] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.075 TLSTESTn1 00:23:38.075 07:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.075 Running I/O for 10 seconds... 00:23:40.395 3521.00 IOPS, 13.75 MiB/s [2024-11-18T06:57:34.421Z] 3577.00 IOPS, 13.97 MiB/s [2024-11-18T06:57:35.362Z] 3595.33 IOPS, 14.04 MiB/s [2024-11-18T06:57:36.300Z] 3600.50 IOPS, 14.06 MiB/s [2024-11-18T06:57:37.241Z] 3595.80 IOPS, 14.05 MiB/s [2024-11-18T06:57:38.180Z] 3599.33 IOPS, 14.06 MiB/s [2024-11-18T06:57:39.123Z] 3600.71 IOPS, 14.07 MiB/s [2024-11-18T06:57:40.519Z] 3600.00 IOPS, 14.06 MiB/s [2024-11-18T06:57:41.457Z] 3597.44 IOPS, 14.05 MiB/s [2024-11-18T06:57:41.457Z] 3597.90 IOPS, 14.05 MiB/s 00:23:48.369 Latency(us) 00:23:48.369 [2024-11-18T06:57:41.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.369 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.369 Verification LBA range: start 0x0 length 0x2000 00:23:48.369 TLSTESTn1 : 10.03 3600.16 14.06 0.00 0.00 35488.60 6747.78 35340.89 00:23:48.369 [2024-11-18T06:57:41.457Z] =================================================================================================================== 00:23:48.369 [2024-11-18T06:57:41.457Z] Total : 3600.16 14.06 0.00 0.00 35488.60 6747.78 35340.89 00:23:48.369 { 00:23:48.369 "results": [ 00:23:48.369 { 00:23:48.369 "job": "TLSTESTn1", 00:23:48.369 "core_mask": "0x4", 00:23:48.369 "workload": "verify", 00:23:48.369 "status": "finished", 00:23:48.369 "verify_range": { 00:23:48.369 "start": 0, 00:23:48.369 "length": 8192 00:23:48.369 }, 00:23:48.369 "queue_depth": 128, 00:23:48.369 "io_size": 4096, 00:23:48.369 "runtime": 10.028441, 00:23:48.369 "iops": 
3600.1607827178723, 00:23:48.369 "mibps": 14.063128057491689, 00:23:48.369 "io_failed": 0, 00:23:48.369 "io_timeout": 0, 00:23:48.369 "avg_latency_us": 35488.597682087144, 00:23:48.369 "min_latency_us": 6747.780740740741, 00:23:48.369 "max_latency_us": 35340.89481481481 00:23:48.369 } 00:23:48.369 ], 00:23:48.369 "core_count": 1 00:23:48.369 } 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 766439 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 766439 ']' 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 766439 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766439 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766439' 00:23:48.369 killing process with pid 766439 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 766439 00:23:48.369 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.369 00:23:48.369 Latency(us) 00:23:48.369 [2024-11-18T06:57:41.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.369 [2024-11-18T06:57:41.457Z] 
=================================================================================================================== 00:23:48.369 [2024-11-18T06:57:41.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 766439 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ULLX64rLaX 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ULLX64rLaX 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ULLX64rLaX 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ULLX64rLaX 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=767760 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 767760 /var/tmp/bdevperf.sock 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 767760 ']' 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.369 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.369 [2024-11-18 07:57:41.388454] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:48.369 [2024-11-18 07:57:41.388569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767760 ] 00:23:48.369 [2024-11-18 07:57:41.457084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.629 [2024-11-18 07:57:41.506177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.629 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.629 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.629 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ULLX64rLaX 00:23:48.888 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.148 [2024-11-18 07:57:42.202079] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.148 [2024-11-18 07:57:42.209765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:49.148 [2024-11-18 07:57:42.210270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa736e0 (107): Transport endpoint is not connected 00:23:49.148 [2024-11-18 07:57:42.211260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa736e0 (9): Bad file descriptor 00:23:49.148 [2024-11-18 
07:57:42.212260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:49.148 [2024-11-18 07:57:42.212287] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:49.149 [2024-11-18 07:57:42.212302] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:49.149 [2024-11-18 07:57:42.212320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:49.149 request: 00:23:49.149 { 00:23:49.149 "name": "TLSTEST", 00:23:49.149 "trtype": "tcp", 00:23:49.149 "traddr": "10.0.0.2", 00:23:49.149 "adrfam": "ipv4", 00:23:49.149 "trsvcid": "4420", 00:23:49.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.149 "prchk_reftag": false, 00:23:49.149 "prchk_guard": false, 00:23:49.149 "hdgst": false, 00:23:49.149 "ddgst": false, 00:23:49.149 "psk": "key0", 00:23:49.149 "allow_unrecognized_csi": false, 00:23:49.149 "method": "bdev_nvme_attach_controller", 00:23:49.149 "req_id": 1 00:23:49.149 } 00:23:49.149 Got JSON-RPC error response 00:23:49.149 response: 00:23:49.149 { 00:23:49.149 "code": -5, 00:23:49.149 "message": "Input/output error" 00:23:49.149 } 00:23:49.149 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 767760 00:23:49.149 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 767760 ']' 00:23:49.149 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 767760 00:23:49.149 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767760 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 767760' 00:23:49.408 killing process with pid 767760 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 767760 00:23:49.408 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.408 00:23:49.408 Latency(us) 00:23:49.408 [2024-11-18T06:57:42.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.408 [2024-11-18T06:57:42.496Z] =================================================================================================================== 00:23:49.408 [2024-11-18T06:57:42.496Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 767760 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.408 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9gaA9XUlw5 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9gaA9XUlw5 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9gaA9XUlw5 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9gaA9XUlw5 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=767899 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 767899 
/var/tmp/bdevperf.sock 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 767899 ']' 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.409 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.667 [2024-11-18 07:57:42.519917] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:49.667 [2024-11-18 07:57:42.520006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767899 ] 00:23:49.667 [2024-11-18 07:57:42.592067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.667 [2024-11-18 07:57:42.639245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.925 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.925 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:49.925 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9gaA9XUlw5 00:23:50.184 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:50.443 [2024-11-18 07:57:43.295395] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.443 [2024-11-18 07:57:43.302413] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:50.443 [2024-11-18 07:57:43.302443] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:50.443 [2024-11-18 07:57:43.302502] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:50.443 [2024-11-18 07:57:43.302663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bb6e0 (107): Transport endpoint is not connected 00:23:50.443 [2024-11-18 07:57:43.303654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bb6e0 (9): Bad file descriptor 00:23:50.443 [2024-11-18 07:57:43.304654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:50.443 [2024-11-18 07:57:43.304676] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:50.443 [2024-11-18 07:57:43.304690] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:50.443 [2024-11-18 07:57:43.304723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:50.443 request: 00:23:50.443 { 00:23:50.443 "name": "TLSTEST", 00:23:50.443 "trtype": "tcp", 00:23:50.443 "traddr": "10.0.0.2", 00:23:50.443 "adrfam": "ipv4", 00:23:50.443 "trsvcid": "4420", 00:23:50.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.443 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:50.443 "prchk_reftag": false, 00:23:50.443 "prchk_guard": false, 00:23:50.443 "hdgst": false, 00:23:50.443 "ddgst": false, 00:23:50.443 "psk": "key0", 00:23:50.443 "allow_unrecognized_csi": false, 00:23:50.443 "method": "bdev_nvme_attach_controller", 00:23:50.443 "req_id": 1 00:23:50.443 } 00:23:50.443 Got JSON-RPC error response 00:23:50.443 response: 00:23:50.443 { 00:23:50.443 "code": -5, 00:23:50.443 "message": "Input/output error" 00:23:50.443 } 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 767899 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 767899 ']' 00:23:50.443 07:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 767899 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767899 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 767899' 00:23:50.443 killing process with pid 767899 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 767899 00:23:50.443 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.443 00:23:50.443 Latency(us) 00:23:50.443 [2024-11-18T06:57:43.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.443 [2024-11-18T06:57:43.531Z] =================================================================================================================== 00:23:50.443 [2024-11-18T06:57:43.531Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.443 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 767899 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:50.701 07:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9gaA9XUlw5 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9gaA9XUlw5 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9gaA9XUlw5 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9gaA9XUlw5 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=768047 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 768047 /var/tmp/bdevperf.sock 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768047 ']' 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.701 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.701 [2024-11-18 07:57:43.608552] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:50.701 [2024-11-18 07:57:43.608644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768047 ] 00:23:50.701 [2024-11-18 07:57:43.677171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.701 [2024-11-18 07:57:43.719609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.960 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.960 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:50.960 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9gaA9XUlw5 00:23:51.218 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.477 [2024-11-18 07:57:44.367070] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.477 [2024-11-18 07:57:44.375262] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:51.477 [2024-11-18 07:57:44.375293] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:51.477 [2024-11-18 07:57:44.375345] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:51.478 [2024-11-18 07:57:44.376228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24016e0 (107): Transport endpoint is not connected 00:23:51.478 [2024-11-18 07:57:44.377219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24016e0 (9): Bad file descriptor 00:23:51.478 [2024-11-18 07:57:44.378219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:51.478 [2024-11-18 07:57:44.378240] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:51.478 [2024-11-18 07:57:44.378254] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:51.478 [2024-11-18 07:57:44.378273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:51.478 request: 00:23:51.478 { 00:23:51.478 "name": "TLSTEST", 00:23:51.478 "trtype": "tcp", 00:23:51.478 "traddr": "10.0.0.2", 00:23:51.478 "adrfam": "ipv4", 00:23:51.478 "trsvcid": "4420", 00:23:51.478 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.478 "prchk_reftag": false, 00:23:51.478 "prchk_guard": false, 00:23:51.478 "hdgst": false, 00:23:51.478 "ddgst": false, 00:23:51.478 "psk": "key0", 00:23:51.478 "allow_unrecognized_csi": false, 00:23:51.478 "method": "bdev_nvme_attach_controller", 00:23:51.478 "req_id": 1 00:23:51.478 } 00:23:51.478 Got JSON-RPC error response 00:23:51.478 response: 00:23:51.478 { 00:23:51.478 "code": -5, 00:23:51.478 "message": "Input/output error" 00:23:51.478 } 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 768047 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768047 ']' 00:23:51.478 07:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768047 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768047 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768047' 00:23:51.478 killing process with pid 768047 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768047 00:23:51.478 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.478 00:23:51.478 Latency(us) 00:23:51.478 [2024-11-18T06:57:44.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.478 [2024-11-18T06:57:44.566Z] =================================================================================================================== 00:23:51.478 [2024-11-18T06:57:44.566Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.478 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768047 00:23:51.736 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:51.736 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:51.736 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:51.736 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:51.736 07:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:51.736 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=768188 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 768188 /var/tmp/bdevperf.sock 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768188 ']' 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.737 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.737 [2024-11-18 07:57:44.679690] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:51.737 [2024-11-18 07:57:44.679797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768188 ] 00:23:51.737 [2024-11-18 07:57:44.748165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.737 [2024-11-18 07:57:44.793747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.994 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.994 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:51.994 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:52.252 [2024-11-18 07:57:45.158439] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:52.252 [2024-11-18 07:57:45.158512] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:52.252 request: 00:23:52.252 { 00:23:52.252 "name": "key0", 00:23:52.252 "path": "", 00:23:52.252 "method": "keyring_file_add_key", 00:23:52.252 "req_id": 1 00:23:52.252 } 00:23:52.252 Got JSON-RPC error response 00:23:52.252 response: 00:23:52.252 { 00:23:52.252 "code": -1, 00:23:52.252 "message": "Operation not permitted" 00:23:52.252 } 00:23:52.252 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.513 [2024-11-18 07:57:45.439304] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:52.513 [2024-11-18 07:57:45.439375] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:52.513 request: 00:23:52.513 { 00:23:52.513 "name": "TLSTEST", 00:23:52.513 "trtype": "tcp", 00:23:52.513 "traddr": "10.0.0.2", 00:23:52.513 "adrfam": "ipv4", 00:23:52.513 "trsvcid": "4420", 00:23:52.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.513 "prchk_reftag": false, 00:23:52.513 "prchk_guard": false, 00:23:52.513 "hdgst": false, 00:23:52.513 "ddgst": false, 00:23:52.513 "psk": "key0", 00:23:52.513 "allow_unrecognized_csi": false, 00:23:52.513 "method": "bdev_nvme_attach_controller", 00:23:52.513 "req_id": 1 00:23:52.513 } 00:23:52.513 Got JSON-RPC error response 00:23:52.513 response: 00:23:52.513 { 00:23:52.513 "code": -126, 00:23:52.513 "message": "Required key not available" 00:23:52.513 } 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 768188 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768188 ']' 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768188 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768188 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768188' 00:23:52.513 killing process with pid 768188 00:23:52.513 
07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768188 00:23:52.513 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.513 00:23:52.513 Latency(us) 00:23:52.513 [2024-11-18T06:57:45.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.513 [2024-11-18T06:57:45.601Z] =================================================================================================================== 00:23:52.513 [2024-11-18T06:57:45.601Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.513 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768188 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 764538 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 764538 ']' 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 764538 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 764538 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 764538' 00:23:52.773 killing process with pid 764538 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 764538 00:23:52.773 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 764538 00:23:53.032 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.JeruRQ0A6m 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:53.033 07:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.JeruRQ0A6m 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=768338 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 768338 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768338 ']' 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.033 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.033 [2024-11-18 07:57:46.047022] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:53.033 [2024-11-18 07:57:46.047132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.033 [2024-11-18 07:57:46.121287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.291 [2024-11-18 07:57:46.164408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.291 [2024-11-18 07:57:46.164474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.291 [2024-11-18 07:57:46.164507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.291 [2024-11-18 07:57:46.164520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.291 [2024-11-18 07:57:46.164530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.291 [2024-11-18 07:57:46.165089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.JeruRQ0A6m 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JeruRQ0A6m 00:23:53.291 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.549 [2024-11-18 07:57:46.549835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.549 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.808 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.067 [2024-11-18 07:57:47.079175] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.067 [2024-11-18 07:57:47.079418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:54.067 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.326 malloc0 00:23:54.326 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.585 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:23:55.151 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.151 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JeruRQ0A6m 00:23:55.151 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:55.151 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:55.151 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:55.151 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JeruRQ0A6m 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=768625 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.152 07:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 768625 /var/tmp/bdevperf.sock 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768625 ']' 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.152 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.410 [2024-11-18 07:57:48.256146] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:55.410 [2024-11-18 07:57:48.256218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768625 ] 00:23:55.410 [2024-11-18 07:57:48.322221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.410 [2024-11-18 07:57:48.367108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.410 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.410 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:55.410 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:23:55.977 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.977 [2024-11-18 07:57:49.005570] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.242 TLSTESTn1 00:23:56.242 07:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:56.242 Running I/O for 10 seconds... 
00:23:58.557 3506.00 IOPS, 13.70 MiB/s [2024-11-18T06:57:52.211Z] 3534.50 IOPS, 13.81 MiB/s [2024-11-18T06:57:53.591Z] 3559.33 IOPS, 13.90 MiB/s [2024-11-18T06:57:54.531Z] 3544.25 IOPS, 13.84 MiB/s [2024-11-18T06:57:55.470Z] 3552.80 IOPS, 13.88 MiB/s [2024-11-18T06:57:56.405Z] 3561.00 IOPS, 13.91 MiB/s [2024-11-18T06:57:57.346Z] 3563.14 IOPS, 13.92 MiB/s [2024-11-18T06:57:58.287Z] 3562.38 IOPS, 13.92 MiB/s [2024-11-18T06:57:59.669Z] 3571.56 IOPS, 13.95 MiB/s [2024-11-18T06:57:59.669Z] 3566.70 IOPS, 13.93 MiB/s 00:24:06.581 Latency(us) 00:24:06.581 [2024-11-18T06:57:59.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.581 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.581 Verification LBA range: start 0x0 length 0x2000 00:24:06.581 TLSTESTn1 : 10.02 3572.55 13.96 0.00 0.00 35770.05 6602.15 29515.47 00:24:06.581 [2024-11-18T06:57:59.669Z] =================================================================================================================== 00:24:06.581 [2024-11-18T06:57:59.669Z] Total : 3572.55 13.96 0.00 0.00 35770.05 6602.15 29515.47 00:24:06.581 { 00:24:06.581 "results": [ 00:24:06.581 { 00:24:06.581 "job": "TLSTESTn1", 00:24:06.581 "core_mask": "0x4", 00:24:06.581 "workload": "verify", 00:24:06.581 "status": "finished", 00:24:06.581 "verify_range": { 00:24:06.581 "start": 0, 00:24:06.581 "length": 8192 00:24:06.581 }, 00:24:06.581 "queue_depth": 128, 00:24:06.581 "io_size": 4096, 00:24:06.581 "runtime": 10.019186, 00:24:06.581 "iops": 3572.545713793516, 00:24:06.581 "mibps": 13.955256694505922, 00:24:06.581 "io_failed": 0, 00:24:06.581 "io_timeout": 0, 00:24:06.581 "avg_latency_us": 35770.0458341249, 00:24:06.581 "min_latency_us": 6602.145185185185, 00:24:06.581 "max_latency_us": 29515.472592592592 00:24:06.581 } 00:24:06.581 ], 00:24:06.581 "core_count": 1 00:24:06.581 } 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 768625 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768625 ']' 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768625 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768625 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768625' 00:24:06.581 killing process with pid 768625 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768625 00:24:06.581 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.581 00:24:06.581 Latency(us) 00:24:06.581 [2024-11-18T06:57:59.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.581 [2024-11-18T06:57:59.669Z] =================================================================================================================== 00:24:06.581 [2024-11-18T06:57:59.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768625 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.JeruRQ0A6m 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JeruRQ0A6m 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JeruRQ0A6m 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.581 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JeruRQ0A6m 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JeruRQ0A6m 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=769941 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.582 07:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 769941 /var/tmp/bdevperf.sock 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 769941 ']' 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.582 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.582 [2024-11-18 07:57:59.516395] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:06.582 [2024-11-18 07:57:59.516488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769941 ] 00:24:06.582 [2024-11-18 07:57:59.583554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.582 [2024-11-18 07:57:59.631075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.841 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.841 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:06.841 07:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:24:07.100 [2024-11-18 07:58:00.010887] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JeruRQ0A6m': 0100666 00:24:07.100 [2024-11-18 07:58:00.010935] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:07.100 request: 00:24:07.100 { 00:24:07.100 "name": "key0", 00:24:07.100 "path": "/tmp/tmp.JeruRQ0A6m", 00:24:07.100 "method": "keyring_file_add_key", 00:24:07.100 "req_id": 1 00:24:07.100 } 00:24:07.100 Got JSON-RPC error response 00:24:07.100 response: 00:24:07.100 { 00:24:07.100 "code": -1, 00:24:07.100 "message": "Operation not permitted" 00:24:07.100 } 00:24:07.100 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.360 [2024-11-18 07:58:00.291786] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.360 [2024-11-18 07:58:00.291871] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:07.360 request: 00:24:07.360 { 00:24:07.360 "name": "TLSTEST", 00:24:07.360 "trtype": "tcp", 00:24:07.360 "traddr": "10.0.0.2", 00:24:07.360 "adrfam": "ipv4", 00:24:07.360 "trsvcid": "4420", 00:24:07.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.360 "prchk_reftag": false, 00:24:07.360 "prchk_guard": false, 00:24:07.360 "hdgst": false, 00:24:07.360 "ddgst": false, 00:24:07.360 "psk": "key0", 00:24:07.360 "allow_unrecognized_csi": false, 00:24:07.360 "method": "bdev_nvme_attach_controller", 00:24:07.360 "req_id": 1 00:24:07.360 } 00:24:07.360 Got JSON-RPC error response 00:24:07.360 response: 00:24:07.360 { 00:24:07.360 "code": -126, 00:24:07.360 "message": "Required key not available" 00:24:07.360 } 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 769941 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 769941 ']' 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 769941 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 769941 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 769941' 00:24:07.360 killing process with pid 769941 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 769941 00:24:07.360 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.360 00:24:07.360 Latency(us) 00:24:07.360 [2024-11-18T06:58:00.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.360 [2024-11-18T06:58:00.448Z] =================================================================================================================== 00:24:07.360 [2024-11-18T06:58:00.448Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:07.360 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 769941 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 768338 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768338 ']' 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768338 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768338 00:24:07.620 07:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768338' 00:24:07.620 killing process with pid 768338 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768338 00:24:07.620 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768338 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=770094 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 770094 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770094 ']' 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:07.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.879 07:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.879 [2024-11-18 07:58:00.832081] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:07.879 [2024-11-18 07:58:00.832163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.879 [2024-11-18 07:58:00.908182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.879 [2024-11-18 07:58:00.952567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.879 [2024-11-18 07:58:00.952629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.879 [2024-11-18 07:58:00.952648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.879 [2024-11-18 07:58:00.952660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.879 [2024-11-18 07:58:00.952670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:07.879 [2024-11-18 07:58:00.953223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.JeruRQ0A6m 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JeruRQ0A6m 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.JeruRQ0A6m 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JeruRQ0A6m 00:24:08.138 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:08.396 [2024-11-18 07:58:01.341905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.396 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:08.656 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:08.917 [2024-11-18 07:58:01.879353] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.917 [2024-11-18 07:58:01.879647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.917 07:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:09.176 malloc0 00:24:09.176 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:09.435 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:24:09.694 [2024-11-18 07:58:02.696947] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JeruRQ0A6m': 0100666 00:24:09.694 [2024-11-18 07:58:02.696991] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:09.694 request: 00:24:09.694 { 00:24:09.694 "name": "key0", 00:24:09.694 "path": "/tmp/tmp.JeruRQ0A6m", 00:24:09.694 "method": "keyring_file_add_key", 00:24:09.694 "req_id": 1 
00:24:09.694 } 00:24:09.694 Got JSON-RPC error response 00:24:09.694 response: 00:24:09.694 { 00:24:09.694 "code": -1, 00:24:09.694 "message": "Operation not permitted" 00:24:09.694 } 00:24:09.694 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:09.953 [2024-11-18 07:58:02.965694] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:09.953 [2024-11-18 07:58:02.965759] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:09.953 request: 00:24:09.953 { 00:24:09.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.953 "host": "nqn.2016-06.io.spdk:host1", 00:24:09.953 "psk": "key0", 00:24:09.953 "method": "nvmf_subsystem_add_host", 00:24:09.953 "req_id": 1 00:24:09.953 } 00:24:09.953 Got JSON-RPC error response 00:24:09.953 response: 00:24:09.953 { 00:24:09.953 "code": -32603, 00:24:09.953 "message": "Internal error" 00:24:09.953 } 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 770094 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 770094 ']' 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 770094 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:09.953 07:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.953 07:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770094 00:24:09.953 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.953 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.953 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770094' 00:24:09.953 killing process with pid 770094 00:24:09.953 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770094 00:24:09.953 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770094 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.JeruRQ0A6m 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=770389 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 770389 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770389 ']' 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.212 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.212 [2024-11-18 07:58:03.262480] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:10.212 [2024-11-18 07:58:03.262605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.471 [2024-11-18 07:58:03.335595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.471 [2024-11-18 07:58:03.375370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.471 [2024-11-18 07:58:03.375435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.471 [2024-11-18 07:58:03.375458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.471 [2024-11-18 07:58:03.375469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.471 [2024-11-18 07:58:03.375478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.471 [2024-11-18 07:58:03.376076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.JeruRQ0A6m 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JeruRQ0A6m 00:24:10.471 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:10.730 [2024-11-18 07:58:03.767741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.730 07:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:10.988 07:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:11.247 [2024-11-18 07:58:04.297186] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.247 [2024-11-18 07:58:04.297431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:11.247 07:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:11.506 malloc0 00:24:11.506 07:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:12.074 07:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:24:12.074 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=770674 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 770674 /var/tmp/bdevperf.sock 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770674 ']' 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:24:12.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.332 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.591 [2024-11-18 07:58:05.433837] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:12.591 [2024-11-18 07:58:05.433922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770674 ] 00:24:12.591 [2024-11-18 07:58:05.501063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.591 [2024-11-18 07:58:05.546009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.591 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.591 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:12.591 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:24:12.850 07:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.109 [2024-11-18 07:58:06.192970] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.367 TLSTESTn1 00:24:13.367 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:13.626 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:13.626 "subsystems": [ 00:24:13.626 { 00:24:13.626 "subsystem": "keyring", 00:24:13.626 "config": [ 00:24:13.626 { 00:24:13.626 "method": "keyring_file_add_key", 00:24:13.626 "params": { 00:24:13.626 "name": "key0", 00:24:13.626 "path": "/tmp/tmp.JeruRQ0A6m" 00:24:13.626 } 00:24:13.626 } 00:24:13.626 ] 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "subsystem": "iobuf", 00:24:13.626 "config": [ 00:24:13.626 { 00:24:13.626 "method": "iobuf_set_options", 00:24:13.626 "params": { 00:24:13.626 "small_pool_count": 8192, 00:24:13.626 "large_pool_count": 1024, 00:24:13.626 "small_bufsize": 8192, 00:24:13.626 "large_bufsize": 135168, 00:24:13.626 "enable_numa": false 00:24:13.626 } 00:24:13.626 } 00:24:13.626 ] 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "subsystem": "sock", 00:24:13.626 "config": [ 00:24:13.626 { 00:24:13.626 "method": "sock_set_default_impl", 00:24:13.626 "params": { 00:24:13.626 "impl_name": "posix" 00:24:13.626 } 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "method": "sock_impl_set_options", 00:24:13.626 "params": { 00:24:13.626 "impl_name": "ssl", 00:24:13.626 "recv_buf_size": 4096, 00:24:13.626 "send_buf_size": 4096, 00:24:13.626 "enable_recv_pipe": true, 00:24:13.626 "enable_quickack": false, 00:24:13.626 "enable_placement_id": 0, 00:24:13.626 "enable_zerocopy_send_server": true, 00:24:13.626 "enable_zerocopy_send_client": false, 00:24:13.626 "zerocopy_threshold": 0, 00:24:13.626 "tls_version": 0, 00:24:13.626 "enable_ktls": false 00:24:13.626 } 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "method": "sock_impl_set_options", 00:24:13.626 "params": { 00:24:13.626 "impl_name": "posix", 00:24:13.626 "recv_buf_size": 2097152, 00:24:13.626 "send_buf_size": 2097152, 00:24:13.626 "enable_recv_pipe": true, 00:24:13.626 "enable_quickack": false, 00:24:13.626 "enable_placement_id": 0, 
00:24:13.626 "enable_zerocopy_send_server": true, 00:24:13.626 "enable_zerocopy_send_client": false, 00:24:13.626 "zerocopy_threshold": 0, 00:24:13.626 "tls_version": 0, 00:24:13.626 "enable_ktls": false 00:24:13.626 } 00:24:13.626 } 00:24:13.626 ] 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "subsystem": "vmd", 00:24:13.626 "config": [] 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "subsystem": "accel", 00:24:13.626 "config": [ 00:24:13.626 { 00:24:13.626 "method": "accel_set_options", 00:24:13.626 "params": { 00:24:13.626 "small_cache_size": 128, 00:24:13.626 "large_cache_size": 16, 00:24:13.626 "task_count": 2048, 00:24:13.626 "sequence_count": 2048, 00:24:13.626 "buf_count": 2048 00:24:13.626 } 00:24:13.626 } 00:24:13.626 ] 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "subsystem": "bdev", 00:24:13.626 "config": [ 00:24:13.626 { 00:24:13.626 "method": "bdev_set_options", 00:24:13.626 "params": { 00:24:13.626 "bdev_io_pool_size": 65535, 00:24:13.626 "bdev_io_cache_size": 256, 00:24:13.626 "bdev_auto_examine": true, 00:24:13.626 "iobuf_small_cache_size": 128, 00:24:13.626 "iobuf_large_cache_size": 16 00:24:13.626 } 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "method": "bdev_raid_set_options", 00:24:13.626 "params": { 00:24:13.626 "process_window_size_kb": 1024, 00:24:13.626 "process_max_bandwidth_mb_sec": 0 00:24:13.626 } 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "method": "bdev_iscsi_set_options", 00:24:13.626 "params": { 00:24:13.626 "timeout_sec": 30 00:24:13.626 } 00:24:13.626 }, 00:24:13.626 { 00:24:13.626 "method": "bdev_nvme_set_options", 00:24:13.626 "params": { 00:24:13.626 "action_on_timeout": "none", 00:24:13.626 "timeout_us": 0, 00:24:13.626 "timeout_admin_us": 0, 00:24:13.626 "keep_alive_timeout_ms": 10000, 00:24:13.626 "arbitration_burst": 0, 00:24:13.626 "low_priority_weight": 0, 00:24:13.626 "medium_priority_weight": 0, 00:24:13.626 "high_priority_weight": 0, 00:24:13.627 "nvme_adminq_poll_period_us": 10000, 00:24:13.627 "nvme_ioq_poll_period_us": 0, 
00:24:13.627 "io_queue_requests": 0, 00:24:13.627 "delay_cmd_submit": true, 00:24:13.627 "transport_retry_count": 4, 00:24:13.627 "bdev_retry_count": 3, 00:24:13.627 "transport_ack_timeout": 0, 00:24:13.627 "ctrlr_loss_timeout_sec": 0, 00:24:13.627 "reconnect_delay_sec": 0, 00:24:13.627 "fast_io_fail_timeout_sec": 0, 00:24:13.627 "disable_auto_failback": false, 00:24:13.627 "generate_uuids": false, 00:24:13.627 "transport_tos": 0, 00:24:13.627 "nvme_error_stat": false, 00:24:13.627 "rdma_srq_size": 0, 00:24:13.627 "io_path_stat": false, 00:24:13.627 "allow_accel_sequence": false, 00:24:13.627 "rdma_max_cq_size": 0, 00:24:13.627 "rdma_cm_event_timeout_ms": 0, 00:24:13.627 "dhchap_digests": [ 00:24:13.627 "sha256", 00:24:13.627 "sha384", 00:24:13.627 "sha512" 00:24:13.627 ], 00:24:13.627 "dhchap_dhgroups": [ 00:24:13.627 "null", 00:24:13.627 "ffdhe2048", 00:24:13.627 "ffdhe3072", 00:24:13.627 "ffdhe4096", 00:24:13.627 "ffdhe6144", 00:24:13.627 "ffdhe8192" 00:24:13.627 ] 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "bdev_nvme_set_hotplug", 00:24:13.627 "params": { 00:24:13.627 "period_us": 100000, 00:24:13.627 "enable": false 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "bdev_malloc_create", 00:24:13.627 "params": { 00:24:13.627 "name": "malloc0", 00:24:13.627 "num_blocks": 8192, 00:24:13.627 "block_size": 4096, 00:24:13.627 "physical_block_size": 4096, 00:24:13.627 "uuid": "b54357bb-192d-4766-97dd-2a7df01a868a", 00:24:13.627 "optimal_io_boundary": 0, 00:24:13.627 "md_size": 0, 00:24:13.627 "dif_type": 0, 00:24:13.627 "dif_is_head_of_md": false, 00:24:13.627 "dif_pi_format": 0 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "bdev_wait_for_examine" 00:24:13.627 } 00:24:13.627 ] 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "subsystem": "nbd", 00:24:13.627 "config": [] 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "subsystem": "scheduler", 00:24:13.627 "config": [ 00:24:13.627 { 00:24:13.627 "method": 
"framework_set_scheduler", 00:24:13.627 "params": { 00:24:13.627 "name": "static" 00:24:13.627 } 00:24:13.627 } 00:24:13.627 ] 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "subsystem": "nvmf", 00:24:13.627 "config": [ 00:24:13.627 { 00:24:13.627 "method": "nvmf_set_config", 00:24:13.627 "params": { 00:24:13.627 "discovery_filter": "match_any", 00:24:13.627 "admin_cmd_passthru": { 00:24:13.627 "identify_ctrlr": false 00:24:13.627 }, 00:24:13.627 "dhchap_digests": [ 00:24:13.627 "sha256", 00:24:13.627 "sha384", 00:24:13.627 "sha512" 00:24:13.627 ], 00:24:13.627 "dhchap_dhgroups": [ 00:24:13.627 "null", 00:24:13.627 "ffdhe2048", 00:24:13.627 "ffdhe3072", 00:24:13.627 "ffdhe4096", 00:24:13.627 "ffdhe6144", 00:24:13.627 "ffdhe8192" 00:24:13.627 ] 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "nvmf_set_max_subsystems", 00:24:13.627 "params": { 00:24:13.627 "max_subsystems": 1024 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "nvmf_set_crdt", 00:24:13.627 "params": { 00:24:13.627 "crdt1": 0, 00:24:13.627 "crdt2": 0, 00:24:13.627 "crdt3": 0 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "nvmf_create_transport", 00:24:13.627 "params": { 00:24:13.627 "trtype": "TCP", 00:24:13.627 "max_queue_depth": 128, 00:24:13.627 "max_io_qpairs_per_ctrlr": 127, 00:24:13.627 "in_capsule_data_size": 4096, 00:24:13.627 "max_io_size": 131072, 00:24:13.627 "io_unit_size": 131072, 00:24:13.627 "max_aq_depth": 128, 00:24:13.627 "num_shared_buffers": 511, 00:24:13.627 "buf_cache_size": 4294967295, 00:24:13.627 "dif_insert_or_strip": false, 00:24:13.627 "zcopy": false, 00:24:13.627 "c2h_success": false, 00:24:13.627 "sock_priority": 0, 00:24:13.627 "abort_timeout_sec": 1, 00:24:13.627 "ack_timeout": 0, 00:24:13.627 "data_wr_pool_size": 0 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "nvmf_create_subsystem", 00:24:13.627 "params": { 00:24:13.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.627 
"allow_any_host": false, 00:24:13.627 "serial_number": "SPDK00000000000001", 00:24:13.627 "model_number": "SPDK bdev Controller", 00:24:13.627 "max_namespaces": 10, 00:24:13.627 "min_cntlid": 1, 00:24:13.627 "max_cntlid": 65519, 00:24:13.627 "ana_reporting": false 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "nvmf_subsystem_add_host", 00:24:13.627 "params": { 00:24:13.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.627 "host": "nqn.2016-06.io.spdk:host1", 00:24:13.627 "psk": "key0" 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "nvmf_subsystem_add_ns", 00:24:13.627 "params": { 00:24:13.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.627 "namespace": { 00:24:13.627 "nsid": 1, 00:24:13.627 "bdev_name": "malloc0", 00:24:13.627 "nguid": "B54357BB192D476697DD2A7DF01A868A", 00:24:13.627 "uuid": "b54357bb-192d-4766-97dd-2a7df01a868a", 00:24:13.627 "no_auto_visible": false 00:24:13.627 } 00:24:13.627 } 00:24:13.627 }, 00:24:13.627 { 00:24:13.627 "method": "nvmf_subsystem_add_listener", 00:24:13.627 "params": { 00:24:13.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.627 "listen_address": { 00:24:13.627 "trtype": "TCP", 00:24:13.627 "adrfam": "IPv4", 00:24:13.627 "traddr": "10.0.0.2", 00:24:13.627 "trsvcid": "4420" 00:24:13.627 }, 00:24:13.627 "secure_channel": true 00:24:13.627 } 00:24:13.627 } 00:24:13.627 ] 00:24:13.627 } 00:24:13.627 ] 00:24:13.627 }' 00:24:13.627 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:13.886 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:13.886 "subsystems": [ 00:24:13.886 { 00:24:13.886 "subsystem": "keyring", 00:24:13.886 "config": [ 00:24:13.886 { 00:24:13.886 "method": "keyring_file_add_key", 00:24:13.886 "params": { 00:24:13.886 "name": "key0", 00:24:13.886 "path": "/tmp/tmp.JeruRQ0A6m" 00:24:13.886 } 
00:24:13.886 } 00:24:13.886 ] 00:24:13.886 }, 00:24:13.886 { 00:24:13.886 "subsystem": "iobuf", 00:24:13.886 "config": [ 00:24:13.886 { 00:24:13.886 "method": "iobuf_set_options", 00:24:13.886 "params": { 00:24:13.886 "small_pool_count": 8192, 00:24:13.886 "large_pool_count": 1024, 00:24:13.886 "small_bufsize": 8192, 00:24:13.886 "large_bufsize": 135168, 00:24:13.886 "enable_numa": false 00:24:13.886 } 00:24:13.886 } 00:24:13.886 ] 00:24:13.886 }, 00:24:13.886 { 00:24:13.886 "subsystem": "sock", 00:24:13.886 "config": [ 00:24:13.886 { 00:24:13.886 "method": "sock_set_default_impl", 00:24:13.886 "params": { 00:24:13.886 "impl_name": "posix" 00:24:13.886 } 00:24:13.886 }, 00:24:13.886 { 00:24:13.886 "method": "sock_impl_set_options", 00:24:13.886 "params": { 00:24:13.886 "impl_name": "ssl", 00:24:13.886 "recv_buf_size": 4096, 00:24:13.886 "send_buf_size": 4096, 00:24:13.886 "enable_recv_pipe": true, 00:24:13.886 "enable_quickack": false, 00:24:13.886 "enable_placement_id": 0, 00:24:13.886 "enable_zerocopy_send_server": true, 00:24:13.886 "enable_zerocopy_send_client": false, 00:24:13.886 "zerocopy_threshold": 0, 00:24:13.886 "tls_version": 0, 00:24:13.886 "enable_ktls": false 00:24:13.886 } 00:24:13.886 }, 00:24:13.886 { 00:24:13.886 "method": "sock_impl_set_options", 00:24:13.886 "params": { 00:24:13.886 "impl_name": "posix", 00:24:13.886 "recv_buf_size": 2097152, 00:24:13.886 "send_buf_size": 2097152, 00:24:13.886 "enable_recv_pipe": true, 00:24:13.886 "enable_quickack": false, 00:24:13.886 "enable_placement_id": 0, 00:24:13.886 "enable_zerocopy_send_server": true, 00:24:13.887 "enable_zerocopy_send_client": false, 00:24:13.887 "zerocopy_threshold": 0, 00:24:13.887 "tls_version": 0, 00:24:13.887 "enable_ktls": false 00:24:13.887 } 00:24:13.887 } 00:24:13.887 ] 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "subsystem": "vmd", 00:24:13.887 "config": [] 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "subsystem": "accel", 00:24:13.887 "config": [ 00:24:13.887 { 00:24:13.887 
"method": "accel_set_options", 00:24:13.887 "params": { 00:24:13.887 "small_cache_size": 128, 00:24:13.887 "large_cache_size": 16, 00:24:13.887 "task_count": 2048, 00:24:13.887 "sequence_count": 2048, 00:24:13.887 "buf_count": 2048 00:24:13.887 } 00:24:13.887 } 00:24:13.887 ] 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "subsystem": "bdev", 00:24:13.887 "config": [ 00:24:13.887 { 00:24:13.887 "method": "bdev_set_options", 00:24:13.887 "params": { 00:24:13.887 "bdev_io_pool_size": 65535, 00:24:13.887 "bdev_io_cache_size": 256, 00:24:13.887 "bdev_auto_examine": true, 00:24:13.887 "iobuf_small_cache_size": 128, 00:24:13.887 "iobuf_large_cache_size": 16 00:24:13.887 } 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "method": "bdev_raid_set_options", 00:24:13.887 "params": { 00:24:13.887 "process_window_size_kb": 1024, 00:24:13.887 "process_max_bandwidth_mb_sec": 0 00:24:13.887 } 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "method": "bdev_iscsi_set_options", 00:24:13.887 "params": { 00:24:13.887 "timeout_sec": 30 00:24:13.887 } 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "method": "bdev_nvme_set_options", 00:24:13.887 "params": { 00:24:13.887 "action_on_timeout": "none", 00:24:13.887 "timeout_us": 0, 00:24:13.887 "timeout_admin_us": 0, 00:24:13.887 "keep_alive_timeout_ms": 10000, 00:24:13.887 "arbitration_burst": 0, 00:24:13.887 "low_priority_weight": 0, 00:24:13.887 "medium_priority_weight": 0, 00:24:13.887 "high_priority_weight": 0, 00:24:13.887 "nvme_adminq_poll_period_us": 10000, 00:24:13.887 "nvme_ioq_poll_period_us": 0, 00:24:13.887 "io_queue_requests": 512, 00:24:13.887 "delay_cmd_submit": true, 00:24:13.887 "transport_retry_count": 4, 00:24:13.887 "bdev_retry_count": 3, 00:24:13.887 "transport_ack_timeout": 0, 00:24:13.887 "ctrlr_loss_timeout_sec": 0, 00:24:13.887 "reconnect_delay_sec": 0, 00:24:13.887 "fast_io_fail_timeout_sec": 0, 00:24:13.887 "disable_auto_failback": false, 00:24:13.887 "generate_uuids": false, 00:24:13.887 "transport_tos": 0, 00:24:13.887 
"nvme_error_stat": false, 00:24:13.887 "rdma_srq_size": 0, 00:24:13.887 "io_path_stat": false, 00:24:13.887 "allow_accel_sequence": false, 00:24:13.887 "rdma_max_cq_size": 0, 00:24:13.887 "rdma_cm_event_timeout_ms": 0, 00:24:13.887 "dhchap_digests": [ 00:24:13.887 "sha256", 00:24:13.887 "sha384", 00:24:13.887 "sha512" 00:24:13.887 ], 00:24:13.887 "dhchap_dhgroups": [ 00:24:13.887 "null", 00:24:13.887 "ffdhe2048", 00:24:13.887 "ffdhe3072", 00:24:13.887 "ffdhe4096", 00:24:13.887 "ffdhe6144", 00:24:13.887 "ffdhe8192" 00:24:13.887 ] 00:24:13.887 } 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "method": "bdev_nvme_attach_controller", 00:24:13.887 "params": { 00:24:13.887 "name": "TLSTEST", 00:24:13.887 "trtype": "TCP", 00:24:13.887 "adrfam": "IPv4", 00:24:13.887 "traddr": "10.0.0.2", 00:24:13.887 "trsvcid": "4420", 00:24:13.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.887 "prchk_reftag": false, 00:24:13.887 "prchk_guard": false, 00:24:13.887 "ctrlr_loss_timeout_sec": 0, 00:24:13.887 "reconnect_delay_sec": 0, 00:24:13.887 "fast_io_fail_timeout_sec": 0, 00:24:13.887 "psk": "key0", 00:24:13.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.887 "hdgst": false, 00:24:13.887 "ddgst": false, 00:24:13.887 "multipath": "multipath" 00:24:13.887 } 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "method": "bdev_nvme_set_hotplug", 00:24:13.887 "params": { 00:24:13.887 "period_us": 100000, 00:24:13.887 "enable": false 00:24:13.887 } 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "method": "bdev_wait_for_examine" 00:24:13.887 } 00:24:13.887 ] 00:24:13.887 }, 00:24:13.887 { 00:24:13.887 "subsystem": "nbd", 00:24:13.887 "config": [] 00:24:13.887 } 00:24:13.887 ] 00:24:13.887 }' 00:24:13.887 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 770674 00:24:13.887 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 770674 ']' 00:24:13.887 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 770674 00:24:13.887 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:13.887 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.887 07:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770674 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770674' 00:24:14.147 killing process with pid 770674 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770674 00:24:14.147 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.147 00:24:14.147 Latency(us) 00:24:14.147 [2024-11-18T06:58:07.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.147 [2024-11-18T06:58:07.235Z] =================================================================================================================== 00:24:14.147 [2024-11-18T06:58:07.235Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770674 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 770389 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 770389 ']' 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 770389 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770389 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770389' 00:24:14.147 killing process with pid 770389 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770389 00:24:14.147 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770389 00:24:14.412 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:14.412 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.412 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.412 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:14.412 "subsystems": [ 00:24:14.412 { 00:24:14.412 "subsystem": "keyring", 00:24:14.412 "config": [ 00:24:14.412 { 00:24:14.412 "method": "keyring_file_add_key", 00:24:14.412 "params": { 00:24:14.412 "name": "key0", 00:24:14.412 "path": "/tmp/tmp.JeruRQ0A6m" 00:24:14.412 } 00:24:14.412 } 00:24:14.412 ] 00:24:14.412 }, 00:24:14.412 { 00:24:14.412 "subsystem": "iobuf", 00:24:14.412 "config": [ 00:24:14.412 { 00:24:14.412 "method": "iobuf_set_options", 00:24:14.412 "params": { 00:24:14.412 "small_pool_count": 8192, 00:24:14.412 "large_pool_count": 1024, 00:24:14.412 "small_bufsize": 8192, 00:24:14.412 "large_bufsize": 135168, 00:24:14.412 "enable_numa": false 00:24:14.412 } 00:24:14.412 } 00:24:14.412 ] 00:24:14.412 }, 00:24:14.412 
{ 00:24:14.412 "subsystem": "sock", 00:24:14.412 "config": [ 00:24:14.412 { 00:24:14.412 "method": "sock_set_default_impl", 00:24:14.412 "params": { 00:24:14.412 "impl_name": "posix" 00:24:14.412 } 00:24:14.412 }, 00:24:14.412 { 00:24:14.412 "method": "sock_impl_set_options", 00:24:14.412 "params": { 00:24:14.412 "impl_name": "ssl", 00:24:14.412 "recv_buf_size": 4096, 00:24:14.412 "send_buf_size": 4096, 00:24:14.412 "enable_recv_pipe": true, 00:24:14.412 "enable_quickack": false, 00:24:14.412 "enable_placement_id": 0, 00:24:14.412 "enable_zerocopy_send_server": true, 00:24:14.412 "enable_zerocopy_send_client": false, 00:24:14.412 "zerocopy_threshold": 0, 00:24:14.412 "tls_version": 0, 00:24:14.412 "enable_ktls": false 00:24:14.412 } 00:24:14.412 }, 00:24:14.412 { 00:24:14.412 "method": "sock_impl_set_options", 00:24:14.412 "params": { 00:24:14.412 "impl_name": "posix", 00:24:14.412 "recv_buf_size": 2097152, 00:24:14.412 "send_buf_size": 2097152, 00:24:14.412 "enable_recv_pipe": true, 00:24:14.412 "enable_quickack": false, 00:24:14.413 "enable_placement_id": 0, 00:24:14.413 "enable_zerocopy_send_server": true, 00:24:14.413 "enable_zerocopy_send_client": false, 00:24:14.413 "zerocopy_threshold": 0, 00:24:14.413 "tls_version": 0, 00:24:14.413 "enable_ktls": false 00:24:14.413 } 00:24:14.413 } 00:24:14.413 ] 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "subsystem": "vmd", 00:24:14.413 "config": [] 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "subsystem": "accel", 00:24:14.413 "config": [ 00:24:14.413 { 00:24:14.413 "method": "accel_set_options", 00:24:14.413 "params": { 00:24:14.413 "small_cache_size": 128, 00:24:14.413 "large_cache_size": 16, 00:24:14.413 "task_count": 2048, 00:24:14.413 "sequence_count": 2048, 00:24:14.413 "buf_count": 2048 00:24:14.413 } 00:24:14.413 } 00:24:14.413 ] 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "subsystem": "bdev", 00:24:14.413 "config": [ 00:24:14.413 { 00:24:14.413 "method": "bdev_set_options", 00:24:14.413 "params": { 00:24:14.413 
"bdev_io_pool_size": 65535, 00:24:14.413 "bdev_io_cache_size": 256, 00:24:14.413 "bdev_auto_examine": true, 00:24:14.413 "iobuf_small_cache_size": 128, 00:24:14.413 "iobuf_large_cache_size": 16 00:24:14.413 } 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "method": "bdev_raid_set_options", 00:24:14.413 "params": { 00:24:14.413 "process_window_size_kb": 1024, 00:24:14.413 "process_max_bandwidth_mb_sec": 0 00:24:14.413 } 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "method": "bdev_iscsi_set_options", 00:24:14.413 "params": { 00:24:14.413 "timeout_sec": 30 00:24:14.413 } 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "method": "bdev_nvme_set_options", 00:24:14.413 "params": { 00:24:14.413 "action_on_timeout": "none", 00:24:14.413 "timeout_us": 0, 00:24:14.413 "timeout_admin_us": 0, 00:24:14.413 "keep_alive_timeout_ms": 10000, 00:24:14.413 "arbitration_burst": 0, 00:24:14.413 "low_priority_weight": 0, 00:24:14.413 "medium_priority_weight": 0, 00:24:14.413 "high_priority_weight": 0, 00:24:14.413 "nvme_adminq_poll_period_us": 10000, 00:24:14.413 "nvme_ioq_poll_period_us": 0, 00:24:14.413 "io_queue_requests": 0, 00:24:14.413 "delay_cmd_submit": true, 00:24:14.413 "transport_retry_count": 4, 00:24:14.413 "bdev_retry_count": 3, 00:24:14.413 "transport_ack_timeout": 0, 00:24:14.413 "ctrlr_loss_timeout_sec": 0, 00:24:14.413 "reconnect_delay_sec": 0, 00:24:14.413 "fast_io_fail_timeout_sec": 0, 00:24:14.413 "disable_auto_failback": false, 00:24:14.413 "generate_uuids": false, 00:24:14.413 "transport_tos": 0, 00:24:14.413 "nvme_error_stat": false, 00:24:14.413 "rdma_srq_size": 0, 00:24:14.413 "io_path_stat": false, 00:24:14.413 "allow_accel_sequence": false, 00:24:14.413 "rdma_max_cq_size": 0, 00:24:14.413 "rdma_cm_event_timeout_ms": 0, 00:24:14.413 "dhchap_digests": [ 00:24:14.413 "sha256", 00:24:14.413 "sha384", 00:24:14.413 "sha512" 00:24:14.413 ], 00:24:14.413 "dhchap_dhgroups": [ 00:24:14.413 "null", 00:24:14.413 "ffdhe2048", 00:24:14.413 "ffdhe3072", 00:24:14.413 "ffdhe4096", 
00:24:14.413 "ffdhe6144", 00:24:14.413 "ffdhe8192" 00:24:14.413 ] 00:24:14.413 } 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "method": "bdev_nvme_set_hotplug", 00:24:14.413 "params": { 00:24:14.413 "period_us": 100000, 00:24:14.413 "enable": false 00:24:14.413 } 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "method": "bdev_malloc_create", 00:24:14.413 "params": { 00:24:14.413 "name": "malloc0", 00:24:14.413 "num_blocks": 8192, 00:24:14.413 "block_size": 4096, 00:24:14.413 "physical_block_size": 4096, 00:24:14.413 "uuid": "b54357bb-192d-4766-97dd-2a7df01a868a", 00:24:14.413 "optimal_io_boundary": 0, 00:24:14.413 "md_size": 0, 00:24:14.413 "dif_type": 0, 00:24:14.413 "dif_is_head_of_md": false, 00:24:14.413 "dif_pi_format": 0 00:24:14.413 } 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "method": "bdev_wait_for_examine" 00:24:14.413 } 00:24:14.413 ] 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "subsystem": "nbd", 00:24:14.413 "config": [] 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "subsystem": "scheduler", 00:24:14.413 "config": [ 00:24:14.413 { 00:24:14.413 "method": "framework_set_scheduler", 00:24:14.413 "params": { 00:24:14.413 "name": "static" 00:24:14.413 } 00:24:14.413 } 00:24:14.413 ] 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "subsystem": "nvmf", 00:24:14.413 "config": [ 00:24:14.413 { 00:24:14.413 "method": "nvmf_set_config", 00:24:14.413 "params": { 00:24:14.413 "discovery_filter": "match_any", 00:24:14.413 "admin_cmd_passthru": { 00:24:14.413 "identify_ctrlr": false 00:24:14.413 }, 00:24:14.413 "dhchap_digests": [ 00:24:14.413 "sha256", 00:24:14.413 "sha384", 00:24:14.413 "sha512" 00:24:14.413 ], 00:24:14.413 "dhchap_dhgroups": [ 00:24:14.413 "null", 00:24:14.413 "ffdhe2048", 00:24:14.413 "ffdhe3072", 00:24:14.413 "ffdhe4096", 00:24:14.413 "ffdhe6144", 00:24:14.413 "ffdhe8192" 00:24:14.413 ] 00:24:14.413 } 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "method": "nvmf_set_max_subsystems", 00:24:14.413 "params": { 00:24:14.413 "max_subsystems": 1024 00:24:14.413 
} 00:24:14.413 }, 00:24:14.413 { 00:24:14.413 "method": "nvmf_set_crdt", 00:24:14.413 "params": { 00:24:14.414 "crdt1": 0, 00:24:14.414 "crdt2": 0, 00:24:14.414 "crdt3": 0 00:24:14.414 } 00:24:14.414 }, 00:24:14.414 { 00:24:14.414 "method": "nvmf_create_transport", 00:24:14.414 "params": { 00:24:14.414 "trtype": "TCP", 00:24:14.414 "max_queue_depth": 128, 00:24:14.414 "max_io_qpairs_per_ctrlr": 127, 00:24:14.414 "in_capsule_data_size": 4096, 00:24:14.414 "max_io_size": 131072, 00:24:14.414 "io_unit_size": 131072, 00:24:14.414 "max_aq_depth": 128, 00:24:14.414 "num_shared_buffers": 511, 00:24:14.414 "buf_cache_size": 4294967295, 00:24:14.414 "dif_insert_or_strip": false, 00:24:14.414 "zcopy": false, 00:24:14.414 "c2h_success": false, 00:24:14.414 "sock_priority": 0, 00:24:14.414 "abort_timeout_sec": 1, 00:24:14.414 "ack_timeout": 0, 00:24:14.414 "data_wr_pool_size": 0 00:24:14.414 } 00:24:14.414 }, 00:24:14.414 { 00:24:14.414 "method": "nvmf_create_subsystem", 00:24:14.414 "params": { 00:24:14.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.414 "allow_any_host": false, 00:24:14.414 "serial_number": "SPDK00000000000001", 00:24:14.414 "model_number": "SPDK bdev Controller", 00:24:14.414 "max_namespaces": 10, 00:24:14.414 "min_cntlid": 1, 00:24:14.414 "max_cntlid": 65519, 00:24:14.414 "ana_reporting": false 00:24:14.414 } 00:24:14.414 }, 00:24:14.414 { 00:24:14.414 "method": "nvmf_subsystem_add_host", 00:24:14.414 "params": { 00:24:14.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.414 "host": "nqn.2016-06.io.spdk:host1", 00:24:14.414 "psk": "key0" 00:24:14.414 } 00:24:14.414 }, 00:24:14.414 { 00:24:14.414 "method": "nvmf_subsystem_add_ns", 00:24:14.414 "params": { 00:24:14.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.414 "namespace": { 00:24:14.414 "nsid": 1, 00:24:14.414 "bdev_name": "malloc0", 00:24:14.414 "nguid": "B54357BB192D476697DD2A7DF01A868A", 00:24:14.414 "uuid": "b54357bb-192d-4766-97dd-2a7df01a868a", 00:24:14.414 "no_auto_visible": false 
00:24:14.414 } 00:24:14.414 } 00:24:14.414 }, 00:24:14.414 { 00:24:14.414 "method": "nvmf_subsystem_add_listener", 00:24:14.414 "params": { 00:24:14.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.414 "listen_address": { 00:24:14.414 "trtype": "TCP", 00:24:14.414 "adrfam": "IPv4", 00:24:14.414 "traddr": "10.0.0.2", 00:24:14.414 "trsvcid": "4420" 00:24:14.414 }, 00:24:14.414 "secure_channel": true 00:24:14.414 } 00:24:14.414 } 00:24:14.414 ] 00:24:14.414 } 00:24:14.414 ] 00:24:14.414 }' 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=770953 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 770953 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770953 ']' 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.414 07:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.713 [2024-11-18 07:58:07.499831] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:14.713 [2024-11-18 07:58:07.499939] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.713 [2024-11-18 07:58:07.572412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.713 [2024-11-18 07:58:07.612824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.713 [2024-11-18 07:58:07.612877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.713 [2024-11-18 07:58:07.612900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.713 [2024-11-18 07:58:07.612910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.713 [2024-11-18 07:58:07.612919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.713 [2024-11-18 07:58:07.613486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.988 [2024-11-18 07:58:07.856892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.988 [2024-11-18 07:58:07.888913] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.988 [2024-11-18 07:58:07.889157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.583 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.583 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=771110 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 771110 /var/tmp/bdevperf.sock 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 771110 ']' 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:15.584 "subsystems": [ 00:24:15.584 { 00:24:15.584 "subsystem": "keyring", 00:24:15.584 "config": [ 00:24:15.584 { 00:24:15.584 "method": "keyring_file_add_key", 00:24:15.584 "params": { 00:24:15.584 "name": "key0", 00:24:15.584 "path": "/tmp/tmp.JeruRQ0A6m" 00:24:15.584 } 00:24:15.584 } 00:24:15.584 ] 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "subsystem": "iobuf", 00:24:15.584 "config": [ 00:24:15.584 { 00:24:15.584 "method": "iobuf_set_options", 00:24:15.584 "params": { 00:24:15.584 "small_pool_count": 8192, 00:24:15.584 "large_pool_count": 1024, 00:24:15.584 "small_bufsize": 8192, 00:24:15.584 "large_bufsize": 135168, 00:24:15.584 "enable_numa": false 00:24:15.584 } 00:24:15.584 } 00:24:15.584 ] 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "subsystem": "sock", 00:24:15.584 "config": [ 00:24:15.584 { 00:24:15.584 "method": "sock_set_default_impl", 00:24:15.584 "params": { 00:24:15.584 "impl_name": "posix" 00:24:15.584 } 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "method": "sock_impl_set_options", 00:24:15.584 "params": { 00:24:15.584 "impl_name": "ssl", 00:24:15.584 "recv_buf_size": 4096, 00:24:15.584 "send_buf_size": 4096, 00:24:15.584 "enable_recv_pipe": true, 00:24:15.584 "enable_quickack": false, 00:24:15.584 "enable_placement_id": 0, 00:24:15.584 "enable_zerocopy_send_server": true, 00:24:15.584 "enable_zerocopy_send_client": false, 00:24:15.584 "zerocopy_threshold": 0, 00:24:15.584 "tls_version": 0, 00:24:15.584 "enable_ktls": false 00:24:15.584 } 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "method": "sock_impl_set_options", 00:24:15.584 "params": { 
00:24:15.584 "impl_name": "posix", 00:24:15.584 "recv_buf_size": 2097152, 00:24:15.584 "send_buf_size": 2097152, 00:24:15.584 "enable_recv_pipe": true, 00:24:15.584 "enable_quickack": false, 00:24:15.584 "enable_placement_id": 0, 00:24:15.584 "enable_zerocopy_send_server": true, 00:24:15.584 "enable_zerocopy_send_client": false, 00:24:15.584 "zerocopy_threshold": 0, 00:24:15.584 "tls_version": 0, 00:24:15.584 "enable_ktls": false 00:24:15.584 } 00:24:15.584 } 00:24:15.584 ] 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "subsystem": "vmd", 00:24:15.584 "config": [] 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "subsystem": "accel", 00:24:15.584 "config": [ 00:24:15.584 { 00:24:15.584 "method": "accel_set_options", 00:24:15.584 "params": { 00:24:15.584 "small_cache_size": 128, 00:24:15.584 "large_cache_size": 16, 00:24:15.584 "task_count": 2048, 00:24:15.584 "sequence_count": 2048, 00:24:15.584 "buf_count": 2048 00:24:15.584 } 00:24:15.584 } 00:24:15.584 ] 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "subsystem": "bdev", 00:24:15.584 "config": [ 00:24:15.584 { 00:24:15.584 "method": "bdev_set_options", 00:24:15.584 "params": { 00:24:15.584 "bdev_io_pool_size": 65535, 00:24:15.584 "bdev_io_cache_size": 256, 00:24:15.584 "bdev_auto_examine": true, 00:24:15.584 "iobuf_small_cache_size": 128, 00:24:15.584 "iobuf_large_cache_size": 16 00:24:15.584 } 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "method": "bdev_raid_set_options", 00:24:15.584 "params": { 00:24:15.584 "process_window_size_kb": 1024, 00:24:15.584 "process_max_bandwidth_mb_sec": 0 00:24:15.584 } 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "method": "bdev_iscsi_set_options", 00:24:15.584 "params": { 00:24:15.584 "timeout_sec": 30 00:24:15.584 } 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "method": "bdev_nvme_set_options", 00:24:15.584 "params": { 00:24:15.584 "action_on_timeout": "none", 00:24:15.584 "timeout_us": 0, 00:24:15.584 "timeout_admin_us": 0, 00:24:15.584 "keep_alive_timeout_ms": 10000, 00:24:15.584 
"arbitration_burst": 0, 00:24:15.584 "low_priority_weight": 0, 00:24:15.584 "medium_priority_weight": 0, 00:24:15.584 "high_priority_weight": 0, 00:24:15.584 "nvme_adminq_poll_period_us": 10000, 00:24:15.584 "nvme_ioq_poll_period_us": 0, 00:24:15.584 "io_queue_requests": 512, 00:24:15.584 "delay_cmd_submit": true, 00:24:15.584 "transport_retry_count": 4, 00:24:15.584 "bdev_retry_count": 3, 00:24:15.584 "transport_ack_timeout": 0, 00:24:15.584 "ctrlr_loss_timeout_sec": 0, 00:24:15.584 "reconnect_delay_sec": 0, 00:24:15.584 "fast_io_fail_timeout_sec": 0, 00:24:15.584 "disable_auto_failback": false, 00:24:15.584 "generate_uuids": false, 00:24:15.584 "transport_tos": 0, 00:24:15.584 "nvme_error_stat": false, 00:24:15.584 "rdma_srq_size": 0, 00:24:15.584 "io_path_stat": false, 00:24:15.584 "allow_accel_sequence": false, 00:24:15.584 "rdma_max_cq_size": 0, 00:24:15.584 "rdma_cm_event_timeout_ms": 0, 00:24:15.584 "dhchap_digests": [ 00:24:15.584 "sha256", 00:24:15.584 "sha384", 00:24:15.584 "sha512" 00:24:15.584 ], 00:24:15.584 "dhchap_dhgroups": [ 00:24:15.584 "null", 00:24:15.584 "ffdhe2048", 00:24:15.584 "ffdhe3072", 00:24:15.584 "ffdhe4096", 00:24:15.584 "ffdhe6144", 00:24:15.584 "ffdhe8192" 00:24:15.584 ] 00:24:15.584 } 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "method": "bdev_nvme_attach_controller", 00:24:15.584 "params": { 00:24:15.584 "name": "TLSTEST", 00:24:15.584 "trtype": "TCP", 00:24:15.584 "adrfam": "IPv4", 00:24:15.584 "traddr": "10.0.0.2", 00:24:15.584 "trsvcid": "4420", 00:24:15.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.584 "prchk_reftag": false, 00:24:15.584 "prchk_guard": false, 00:24:15.584 "ctrlr_loss_timeout_sec": 0, 00:24:15.584 "reconnect_delay_sec": 0, 00:24:15.584 "fast_io_fail_timeout_sec": 0, 00:24:15.584 "psk": "key0", 00:24:15.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.584 "hdgst": false, 00:24:15.584 "ddgst": false, 00:24:15.584 "multipath": "multipath" 00:24:15.584 } 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 
"method": "bdev_nvme_set_hotplug", 00:24:15.584 "params": { 00:24:15.584 "period_us": 100000, 00:24:15.584 "enable": false 00:24:15.584 } 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "method": "bdev_wait_for_examine" 00:24:15.584 } 00:24:15.584 ] 00:24:15.584 }, 00:24:15.584 { 00:24:15.584 "subsystem": "nbd", 00:24:15.584 "config": [] 00:24:15.584 } 00:24:15.584 ] 00:24:15.584 }' 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.584 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.585 [2024-11-18 07:58:08.563423] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:15.585 [2024-11-18 07:58:08.563534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771110 ] 00:24:15.585 [2024-11-18 07:58:08.632081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.844 [2024-11-18 07:58:08.677744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.844 [2024-11-18 07:58:08.853414] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.104 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.104 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:16.104 07:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:16.104 Running I/O for 10 seconds... 
00:24:18.438 3308.00 IOPS, 12.92 MiB/s [2024-11-18T06:58:12.465Z] 3336.50 IOPS, 13.03 MiB/s [2024-11-18T06:58:13.406Z] 3340.67 IOPS, 13.05 MiB/s [2024-11-18T06:58:14.344Z] 3355.25 IOPS, 13.11 MiB/s [2024-11-18T06:58:15.285Z] 3367.40 IOPS, 13.15 MiB/s [2024-11-18T06:58:16.226Z] 3359.83 IOPS, 13.12 MiB/s [2024-11-18T06:58:17.166Z] 3361.43 IOPS, 13.13 MiB/s [2024-11-18T06:58:18.547Z] 3361.12 IOPS, 13.13 MiB/s [2024-11-18T06:58:19.481Z] 3360.89 IOPS, 13.13 MiB/s [2024-11-18T06:58:19.481Z] 3363.50 IOPS, 13.14 MiB/s 00:24:26.393 Latency(us) 00:24:26.393 [2024-11-18T06:58:19.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.393 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:26.393 Verification LBA range: start 0x0 length 0x2000 00:24:26.393 TLSTESTn1 : 10.02 3369.26 13.16 0.00 0.00 37930.59 7718.68 32816.55 00:24:26.393 [2024-11-18T06:58:19.481Z] =================================================================================================================== 00:24:26.393 [2024-11-18T06:58:19.481Z] Total : 3369.26 13.16 0.00 0.00 37930.59 7718.68 32816.55 00:24:26.393 { 00:24:26.393 "results": [ 00:24:26.393 { 00:24:26.393 "job": "TLSTESTn1", 00:24:26.393 "core_mask": "0x4", 00:24:26.393 "workload": "verify", 00:24:26.393 "status": "finished", 00:24:26.393 "verify_range": { 00:24:26.393 "start": 0, 00:24:26.393 "length": 8192 00:24:26.393 }, 00:24:26.393 "queue_depth": 128, 00:24:26.393 "io_size": 4096, 00:24:26.393 "runtime": 10.020019, 00:24:26.393 "iops": 3369.2550882388546, 00:24:26.393 "mibps": 13.161152688433026, 00:24:26.393 "io_failed": 0, 00:24:26.393 "io_timeout": 0, 00:24:26.393 "avg_latency_us": 37930.59351097069, 00:24:26.393 "min_latency_us": 7718.684444444444, 00:24:26.393 "max_latency_us": 32816.54518518518 00:24:26.393 } 00:24:26.393 ], 00:24:26.393 "core_count": 1 00:24:26.393 } 00:24:26.393 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 771110 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 771110 ']' 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 771110 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771110 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771110' 00:24:26.394 killing process with pid 771110 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 771110 00:24:26.394 Received shutdown signal, test time was about 10.000000 seconds 00:24:26.394 00:24:26.394 Latency(us) 00:24:26.394 [2024-11-18T06:58:19.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.394 [2024-11-18T06:58:19.482Z] =================================================================================================================== 00:24:26.394 [2024-11-18T06:58:19.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 771110 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 770953 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 770953 ']' 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 770953 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770953 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770953' 00:24:26.394 killing process with pid 770953 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770953 00:24:26.394 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770953 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=772425 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 772425 00:24:26.652 07:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 772425 ']' 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.652 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.652 [2024-11-18 07:58:19.662014] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:26.652 [2024-11-18 07:58:19.662125] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.652 [2024-11-18 07:58:19.735902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.910 [2024-11-18 07:58:19.777846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.910 [2024-11-18 07:58:19.777910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.910 [2024-11-18 07:58:19.777933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.910 [2024-11-18 07:58:19.777944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:26.910 [2024-11-18 07:58:19.777954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.910 [2024-11-18 07:58:19.778542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.JeruRQ0A6m 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JeruRQ0A6m 00:24:26.911 07:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:27.169 [2024-11-18 07:58:20.176647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.169 07:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:27.427 07:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:27.686 [2024-11-18 07:58:20.726152] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:27.686 [2024-11-18 07:58:20.726408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.686 07:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:27.944 malloc0 00:24:27.944 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:28.203 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:24:28.462 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=772709 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 772709 /var/tmp/bdevperf.sock 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 772709 ']' 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.720 07:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.720 07:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.978 [2024-11-18 07:58:21.847935] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:28.978 [2024-11-18 07:58:21.848014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772709 ] 00:24:28.978 [2024-11-18 07:58:21.915059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.978 [2024-11-18 07:58:21.966445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.237 07:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.237 07:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:29.237 07:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:24:29.495 07:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:29.753 [2024-11-18 07:58:22.601288] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:24:29.753 nvme0n1 00:24:29.753 07:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.753 Running I/O for 1 seconds... 00:24:31.131 3557.00 IOPS, 13.89 MiB/s 00:24:31.131 Latency(us) 00:24:31.131 [2024-11-18T06:58:24.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.131 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:31.131 Verification LBA range: start 0x0 length 0x2000 00:24:31.131 nvme0n1 : 1.03 3583.79 14.00 0.00 0.00 35265.10 6359.42 31845.64 00:24:31.131 [2024-11-18T06:58:24.219Z] =================================================================================================================== 00:24:31.131 [2024-11-18T06:58:24.219Z] Total : 3583.79 14.00 0.00 0.00 35265.10 6359.42 31845.64 00:24:31.131 { 00:24:31.131 "results": [ 00:24:31.131 { 00:24:31.131 "job": "nvme0n1", 00:24:31.131 "core_mask": "0x2", 00:24:31.131 "workload": "verify", 00:24:31.131 "status": "finished", 00:24:31.131 "verify_range": { 00:24:31.131 "start": 0, 00:24:31.131 "length": 8192 00:24:31.131 }, 00:24:31.131 "queue_depth": 128, 00:24:31.131 "io_size": 4096, 00:24:31.131 "runtime": 1.028241, 00:24:31.131 "iops": 3583.790181484691, 00:24:31.131 "mibps": 13.999180396424574, 00:24:31.131 "io_failed": 0, 00:24:31.131 "io_timeout": 0, 00:24:31.131 "avg_latency_us": 35265.09977466204, 00:24:31.131 "min_latency_us": 6359.419259259259, 00:24:31.131 "max_latency_us": 31845.64148148148 00:24:31.131 } 00:24:31.131 ], 00:24:31.131 "core_count": 1 00:24:31.131 } 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 772709 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 772709 ']' 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 
-- # kill -0 772709 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772709 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772709' 00:24:31.131 killing process with pid 772709 00:24:31.131 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 772709 00:24:31.131 Received shutdown signal, test time was about 1.000000 seconds 00:24:31.131 00:24:31.131 Latency(us) 00:24:31.131 [2024-11-18T06:58:24.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.131 [2024-11-18T06:58:24.219Z] =================================================================================================================== 00:24:31.131 [2024-11-18T06:58:24.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.132 07:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 772709 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 772425 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 772425 ']' 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 772425 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772425 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772425' 00:24:31.132 killing process with pid 772425 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 772425 00:24:31.132 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 772425 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=772996 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 772996 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 772996 ']' 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.388 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.388 07:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.389 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.389 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.389 [2024-11-18 07:58:24.352388] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:31.389 [2024-11-18 07:58:24.352467] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.389 [2024-11-18 07:58:24.425692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.389 [2024-11-18 07:58:24.469887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.389 [2024-11-18 07:58:24.469943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.389 [2024-11-18 07:58:24.469966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.389 [2024-11-18 07:58:24.469977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.389 [2024-11-18 07:58:24.469986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.389 [2024-11-18 07:58:24.470555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.646 [2024-11-18 07:58:24.604692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.646 malloc0 00:24:31.646 [2024-11-18 07:58:24.634924] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:31.646 [2024-11-18 07:58:24.635183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=773019 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 773019 /var/tmp/bdevperf.sock 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 773019 ']' 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.646 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.646 [2024-11-18 07:58:24.705996] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:31.646 [2024-11-18 07:58:24.706058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773019 ] 00:24:31.904 [2024-11-18 07:58:24.772955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.904 [2024-11-18 07:58:24.818038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.904 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.904 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:31.904 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JeruRQ0A6m 00:24:32.162 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:32.420 [2024-11-18 07:58:25.508775] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.678 nvme0n1 00:24:32.678 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:32.678 Running I/O for 1 seconds... 
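Aside: the bdevperf summary lines in this log can be cross-checked by hand, since the MiB/s column is simply the reported IOPS scaled by the 4 KiB I/O size (`-o 4k`), and the I/O count is IOPS times the reported runtime. A minimal sketch of that arithmetic, with the figures copied from the second run below (the formulae themselves are generic, not tied to these values):

```python
# Cross-check of a bdevperf summary row. Values are taken from the second
# verify run in this log; only the two formulae are general.
iops = 3399.29          # "iops" field from the results JSON
io_size = 4096          # bytes per I/O (-o 4k)
runtime = 1.023742      # "runtime" field, seconds

# MiB/s column = IOPS * I/O size / 2^20
mibps = iops * io_size / (1024 * 1024)

# Total I/Os completed during the run
total_ios = iops * runtime

print(round(mibps, 2))   # matches the reported 13.28 MiB/s
print(round(total_ios))  # roughly 3480 I/Os in just over one second
```

The same check applies to the first run (3583.79 IOPS over 1.028241 s giving 14.00 MiB/s); a mismatch between these columns would indicate a garbled results dump rather than a real performance anomaly.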
00:24:34.053 3352.00 IOPS, 13.09 MiB/s 00:24:34.053 Latency(us) 00:24:34.053 [2024-11-18T06:58:27.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.053 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:34.053 Verification LBA range: start 0x0 length 0x2000 00:24:34.053 nvme0n1 : 1.02 3399.29 13.28 0.00 0.00 37211.66 7815.77 28932.93 00:24:34.053 [2024-11-18T06:58:27.141Z] =================================================================================================================== 00:24:34.053 [2024-11-18T06:58:27.141Z] Total : 3399.29 13.28 0.00 0.00 37211.66 7815.77 28932.93 00:24:34.053 { 00:24:34.053 "results": [ 00:24:34.053 { 00:24:34.053 "job": "nvme0n1", 00:24:34.053 "core_mask": "0x2", 00:24:34.053 "workload": "verify", 00:24:34.053 "status": "finished", 00:24:34.053 "verify_range": { 00:24:34.053 "start": 0, 00:24:34.053 "length": 8192 00:24:34.053 }, 00:24:34.053 "queue_depth": 128, 00:24:34.053 "io_size": 4096, 00:24:34.053 "runtime": 1.023742, 00:24:34.053 "iops": 3399.293962736705, 00:24:34.053 "mibps": 13.278492041940254, 00:24:34.053 "io_failed": 0, 00:24:34.053 "io_timeout": 0, 00:24:34.053 "avg_latency_us": 37211.65606470838, 00:24:34.053 "min_latency_us": 7815.774814814815, 00:24:34.053 "max_latency_us": 28932.93037037037 00:24:34.053 } 00:24:34.053 ], 00:24:34.053 "core_count": 1 00:24:34.053 } 00:24:34.053 07:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:34.053 07:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.053 07:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.053 07:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.053 07:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:34.053 "subsystems": [ 00:24:34.053 { 00:24:34.053 "subsystem": 
"keyring", 00:24:34.053 "config": [ 00:24:34.053 { 00:24:34.053 "method": "keyring_file_add_key", 00:24:34.053 "params": { 00:24:34.053 "name": "key0", 00:24:34.053 "path": "/tmp/tmp.JeruRQ0A6m" 00:24:34.053 } 00:24:34.053 } 00:24:34.053 ] 00:24:34.053 }, 00:24:34.053 { 00:24:34.053 "subsystem": "iobuf", 00:24:34.053 "config": [ 00:24:34.053 { 00:24:34.053 "method": "iobuf_set_options", 00:24:34.053 "params": { 00:24:34.053 "small_pool_count": 8192, 00:24:34.053 "large_pool_count": 1024, 00:24:34.053 "small_bufsize": 8192, 00:24:34.054 "large_bufsize": 135168, 00:24:34.054 "enable_numa": false 00:24:34.054 } 00:24:34.054 } 00:24:34.054 ] 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "subsystem": "sock", 00:24:34.054 "config": [ 00:24:34.054 { 00:24:34.054 "method": "sock_set_default_impl", 00:24:34.054 "params": { 00:24:34.054 "impl_name": "posix" 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "sock_impl_set_options", 00:24:34.054 "params": { 00:24:34.054 "impl_name": "ssl", 00:24:34.054 "recv_buf_size": 4096, 00:24:34.054 "send_buf_size": 4096, 00:24:34.054 "enable_recv_pipe": true, 00:24:34.054 "enable_quickack": false, 00:24:34.054 "enable_placement_id": 0, 00:24:34.054 "enable_zerocopy_send_server": true, 00:24:34.054 "enable_zerocopy_send_client": false, 00:24:34.054 "zerocopy_threshold": 0, 00:24:34.054 "tls_version": 0, 00:24:34.054 "enable_ktls": false 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "sock_impl_set_options", 00:24:34.054 "params": { 00:24:34.054 "impl_name": "posix", 00:24:34.054 "recv_buf_size": 2097152, 00:24:34.054 "send_buf_size": 2097152, 00:24:34.054 "enable_recv_pipe": true, 00:24:34.054 "enable_quickack": false, 00:24:34.054 "enable_placement_id": 0, 00:24:34.054 "enable_zerocopy_send_server": true, 00:24:34.054 "enable_zerocopy_send_client": false, 00:24:34.054 "zerocopy_threshold": 0, 00:24:34.054 "tls_version": 0, 00:24:34.054 "enable_ktls": false 00:24:34.054 } 00:24:34.054 } 00:24:34.054 
] 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "subsystem": "vmd", 00:24:34.054 "config": [] 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "subsystem": "accel", 00:24:34.054 "config": [ 00:24:34.054 { 00:24:34.054 "method": "accel_set_options", 00:24:34.054 "params": { 00:24:34.054 "small_cache_size": 128, 00:24:34.054 "large_cache_size": 16, 00:24:34.054 "task_count": 2048, 00:24:34.054 "sequence_count": 2048, 00:24:34.054 "buf_count": 2048 00:24:34.054 } 00:24:34.054 } 00:24:34.054 ] 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "subsystem": "bdev", 00:24:34.054 "config": [ 00:24:34.054 { 00:24:34.054 "method": "bdev_set_options", 00:24:34.054 "params": { 00:24:34.054 "bdev_io_pool_size": 65535, 00:24:34.054 "bdev_io_cache_size": 256, 00:24:34.054 "bdev_auto_examine": true, 00:24:34.054 "iobuf_small_cache_size": 128, 00:24:34.054 "iobuf_large_cache_size": 16 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "bdev_raid_set_options", 00:24:34.054 "params": { 00:24:34.054 "process_window_size_kb": 1024, 00:24:34.054 "process_max_bandwidth_mb_sec": 0 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "bdev_iscsi_set_options", 00:24:34.054 "params": { 00:24:34.054 "timeout_sec": 30 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "bdev_nvme_set_options", 00:24:34.054 "params": { 00:24:34.054 "action_on_timeout": "none", 00:24:34.054 "timeout_us": 0, 00:24:34.054 "timeout_admin_us": 0, 00:24:34.054 "keep_alive_timeout_ms": 10000, 00:24:34.054 "arbitration_burst": 0, 00:24:34.054 "low_priority_weight": 0, 00:24:34.054 "medium_priority_weight": 0, 00:24:34.054 "high_priority_weight": 0, 00:24:34.054 "nvme_adminq_poll_period_us": 10000, 00:24:34.054 "nvme_ioq_poll_period_us": 0, 00:24:34.054 "io_queue_requests": 0, 00:24:34.054 "delay_cmd_submit": true, 00:24:34.054 "transport_retry_count": 4, 00:24:34.054 "bdev_retry_count": 3, 00:24:34.054 "transport_ack_timeout": 0, 00:24:34.054 "ctrlr_loss_timeout_sec": 0, 
00:24:34.054 "reconnect_delay_sec": 0, 00:24:34.054 "fast_io_fail_timeout_sec": 0, 00:24:34.054 "disable_auto_failback": false, 00:24:34.054 "generate_uuids": false, 00:24:34.054 "transport_tos": 0, 00:24:34.054 "nvme_error_stat": false, 00:24:34.054 "rdma_srq_size": 0, 00:24:34.054 "io_path_stat": false, 00:24:34.054 "allow_accel_sequence": false, 00:24:34.054 "rdma_max_cq_size": 0, 00:24:34.054 "rdma_cm_event_timeout_ms": 0, 00:24:34.054 "dhchap_digests": [ 00:24:34.054 "sha256", 00:24:34.054 "sha384", 00:24:34.054 "sha512" 00:24:34.054 ], 00:24:34.054 "dhchap_dhgroups": [ 00:24:34.054 "null", 00:24:34.054 "ffdhe2048", 00:24:34.054 "ffdhe3072", 00:24:34.054 "ffdhe4096", 00:24:34.054 "ffdhe6144", 00:24:34.054 "ffdhe8192" 00:24:34.054 ] 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "bdev_nvme_set_hotplug", 00:24:34.054 "params": { 00:24:34.054 "period_us": 100000, 00:24:34.054 "enable": false 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "bdev_malloc_create", 00:24:34.054 "params": { 00:24:34.054 "name": "malloc0", 00:24:34.054 "num_blocks": 8192, 00:24:34.054 "block_size": 4096, 00:24:34.054 "physical_block_size": 4096, 00:24:34.054 "uuid": "2e7f8670-9996-411c-85c1-cca658eda98a", 00:24:34.054 "optimal_io_boundary": 0, 00:24:34.054 "md_size": 0, 00:24:34.054 "dif_type": 0, 00:24:34.054 "dif_is_head_of_md": false, 00:24:34.054 "dif_pi_format": 0 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "bdev_wait_for_examine" 00:24:34.054 } 00:24:34.054 ] 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "subsystem": "nbd", 00:24:34.054 "config": [] 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "subsystem": "scheduler", 00:24:34.054 "config": [ 00:24:34.054 { 00:24:34.054 "method": "framework_set_scheduler", 00:24:34.054 "params": { 00:24:34.054 "name": "static" 00:24:34.054 } 00:24:34.054 } 00:24:34.054 ] 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "subsystem": "nvmf", 00:24:34.054 "config": [ 00:24:34.054 { 
00:24:34.054 "method": "nvmf_set_config", 00:24:34.054 "params": { 00:24:34.054 "discovery_filter": "match_any", 00:24:34.054 "admin_cmd_passthru": { 00:24:34.054 "identify_ctrlr": false 00:24:34.054 }, 00:24:34.054 "dhchap_digests": [ 00:24:34.054 "sha256", 00:24:34.054 "sha384", 00:24:34.054 "sha512" 00:24:34.054 ], 00:24:34.054 "dhchap_dhgroups": [ 00:24:34.054 "null", 00:24:34.054 "ffdhe2048", 00:24:34.054 "ffdhe3072", 00:24:34.054 "ffdhe4096", 00:24:34.054 "ffdhe6144", 00:24:34.054 "ffdhe8192" 00:24:34.054 ] 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "nvmf_set_max_subsystems", 00:24:34.054 "params": { 00:24:34.054 "max_subsystems": 1024 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "nvmf_set_crdt", 00:24:34.054 "params": { 00:24:34.054 "crdt1": 0, 00:24:34.054 "crdt2": 0, 00:24:34.054 "crdt3": 0 00:24:34.054 } 00:24:34.054 }, 00:24:34.054 { 00:24:34.054 "method": "nvmf_create_transport", 00:24:34.054 "params": { 00:24:34.054 "trtype": "TCP", 00:24:34.054 "max_queue_depth": 128, 00:24:34.054 "max_io_qpairs_per_ctrlr": 127, 00:24:34.054 "in_capsule_data_size": 4096, 00:24:34.054 "max_io_size": 131072, 00:24:34.054 "io_unit_size": 131072, 00:24:34.054 "max_aq_depth": 128, 00:24:34.054 "num_shared_buffers": 511, 00:24:34.054 "buf_cache_size": 4294967295, 00:24:34.054 "dif_insert_or_strip": false, 00:24:34.054 "zcopy": false, 00:24:34.054 "c2h_success": false, 00:24:34.054 "sock_priority": 0, 00:24:34.054 "abort_timeout_sec": 1, 00:24:34.054 "ack_timeout": 0, 00:24:34.054 "data_wr_pool_size": 0 00:24:34.054 } 00:24:34.054 }, 00:24:34.055 { 00:24:34.055 "method": "nvmf_create_subsystem", 00:24:34.055 "params": { 00:24:34.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.055 "allow_any_host": false, 00:24:34.055 "serial_number": "00000000000000000000", 00:24:34.055 "model_number": "SPDK bdev Controller", 00:24:34.055 "max_namespaces": 32, 00:24:34.055 "min_cntlid": 1, 00:24:34.055 "max_cntlid": 65519, 00:24:34.055 
"ana_reporting": false 00:24:34.055 } 00:24:34.055 }, 00:24:34.055 { 00:24:34.055 "method": "nvmf_subsystem_add_host", 00:24:34.055 "params": { 00:24:34.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.055 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.055 "psk": "key0" 00:24:34.055 } 00:24:34.055 }, 00:24:34.055 { 00:24:34.055 "method": "nvmf_subsystem_add_ns", 00:24:34.055 "params": { 00:24:34.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.055 "namespace": { 00:24:34.055 "nsid": 1, 00:24:34.055 "bdev_name": "malloc0", 00:24:34.055 "nguid": "2E7F86709996411C85C1CCA658EDA98A", 00:24:34.055 "uuid": "2e7f8670-9996-411c-85c1-cca658eda98a", 00:24:34.055 "no_auto_visible": false 00:24:34.055 } 00:24:34.055 } 00:24:34.055 }, 00:24:34.055 { 00:24:34.055 "method": "nvmf_subsystem_add_listener", 00:24:34.055 "params": { 00:24:34.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.055 "listen_address": { 00:24:34.055 "trtype": "TCP", 00:24:34.055 "adrfam": "IPv4", 00:24:34.055 "traddr": "10.0.0.2", 00:24:34.055 "trsvcid": "4420" 00:24:34.055 }, 00:24:34.055 "secure_channel": false, 00:24:34.055 "sock_impl": "ssl" 00:24:34.055 } 00:24:34.055 } 00:24:34.055 ] 00:24:34.055 } 00:24:34.055 ] 00:24:34.055 }' 00:24:34.055 07:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:34.315 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:34.315 "subsystems": [ 00:24:34.315 { 00:24:34.315 "subsystem": "keyring", 00:24:34.315 "config": [ 00:24:34.315 { 00:24:34.315 "method": "keyring_file_add_key", 00:24:34.315 "params": { 00:24:34.315 "name": "key0", 00:24:34.315 "path": "/tmp/tmp.JeruRQ0A6m" 00:24:34.315 } 00:24:34.315 } 00:24:34.315 ] 00:24:34.315 }, 00:24:34.315 { 00:24:34.315 "subsystem": "iobuf", 00:24:34.315 "config": [ 00:24:34.315 { 00:24:34.315 "method": "iobuf_set_options", 00:24:34.315 "params": { 00:24:34.315 
"small_pool_count": 8192, 00:24:34.315 "large_pool_count": 1024, 00:24:34.315 "small_bufsize": 8192, 00:24:34.315 "large_bufsize": 135168, 00:24:34.315 "enable_numa": false 00:24:34.315 } 00:24:34.315 } 00:24:34.315 ] 00:24:34.315 }, 00:24:34.315 { 00:24:34.315 "subsystem": "sock", 00:24:34.315 "config": [ 00:24:34.315 { 00:24:34.316 "method": "sock_set_default_impl", 00:24:34.316 "params": { 00:24:34.316 "impl_name": "posix" 00:24:34.316 } 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "method": "sock_impl_set_options", 00:24:34.316 "params": { 00:24:34.316 "impl_name": "ssl", 00:24:34.316 "recv_buf_size": 4096, 00:24:34.316 "send_buf_size": 4096, 00:24:34.316 "enable_recv_pipe": true, 00:24:34.316 "enable_quickack": false, 00:24:34.316 "enable_placement_id": 0, 00:24:34.316 "enable_zerocopy_send_server": true, 00:24:34.316 "enable_zerocopy_send_client": false, 00:24:34.316 "zerocopy_threshold": 0, 00:24:34.316 "tls_version": 0, 00:24:34.316 "enable_ktls": false 00:24:34.316 } 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "method": "sock_impl_set_options", 00:24:34.316 "params": { 00:24:34.316 "impl_name": "posix", 00:24:34.316 "recv_buf_size": 2097152, 00:24:34.316 "send_buf_size": 2097152, 00:24:34.316 "enable_recv_pipe": true, 00:24:34.316 "enable_quickack": false, 00:24:34.316 "enable_placement_id": 0, 00:24:34.316 "enable_zerocopy_send_server": true, 00:24:34.316 "enable_zerocopy_send_client": false, 00:24:34.316 "zerocopy_threshold": 0, 00:24:34.316 "tls_version": 0, 00:24:34.316 "enable_ktls": false 00:24:34.316 } 00:24:34.316 } 00:24:34.316 ] 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "subsystem": "vmd", 00:24:34.316 "config": [] 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "subsystem": "accel", 00:24:34.316 "config": [ 00:24:34.316 { 00:24:34.316 "method": "accel_set_options", 00:24:34.316 "params": { 00:24:34.316 "small_cache_size": 128, 00:24:34.316 "large_cache_size": 16, 00:24:34.316 "task_count": 2048, 00:24:34.316 "sequence_count": 2048, 00:24:34.316 
"buf_count": 2048 00:24:34.316 } 00:24:34.316 } 00:24:34.316 ] 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "subsystem": "bdev", 00:24:34.316 "config": [ 00:24:34.316 { 00:24:34.316 "method": "bdev_set_options", 00:24:34.316 "params": { 00:24:34.316 "bdev_io_pool_size": 65535, 00:24:34.316 "bdev_io_cache_size": 256, 00:24:34.316 "bdev_auto_examine": true, 00:24:34.316 "iobuf_small_cache_size": 128, 00:24:34.316 "iobuf_large_cache_size": 16 00:24:34.316 } 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "method": "bdev_raid_set_options", 00:24:34.316 "params": { 00:24:34.316 "process_window_size_kb": 1024, 00:24:34.316 "process_max_bandwidth_mb_sec": 0 00:24:34.316 } 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "method": "bdev_iscsi_set_options", 00:24:34.316 "params": { 00:24:34.316 "timeout_sec": 30 00:24:34.316 } 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "method": "bdev_nvme_set_options", 00:24:34.316 "params": { 00:24:34.316 "action_on_timeout": "none", 00:24:34.316 "timeout_us": 0, 00:24:34.316 "timeout_admin_us": 0, 00:24:34.316 "keep_alive_timeout_ms": 10000, 00:24:34.316 "arbitration_burst": 0, 00:24:34.316 "low_priority_weight": 0, 00:24:34.316 "medium_priority_weight": 0, 00:24:34.316 "high_priority_weight": 0, 00:24:34.316 "nvme_adminq_poll_period_us": 10000, 00:24:34.316 "nvme_ioq_poll_period_us": 0, 00:24:34.316 "io_queue_requests": 512, 00:24:34.316 "delay_cmd_submit": true, 00:24:34.316 "transport_retry_count": 4, 00:24:34.316 "bdev_retry_count": 3, 00:24:34.316 "transport_ack_timeout": 0, 00:24:34.316 "ctrlr_loss_timeout_sec": 0, 00:24:34.316 "reconnect_delay_sec": 0, 00:24:34.316 "fast_io_fail_timeout_sec": 0, 00:24:34.316 "disable_auto_failback": false, 00:24:34.316 "generate_uuids": false, 00:24:34.316 "transport_tos": 0, 00:24:34.316 "nvme_error_stat": false, 00:24:34.316 "rdma_srq_size": 0, 00:24:34.316 "io_path_stat": false, 00:24:34.316 "allow_accel_sequence": false, 00:24:34.316 "rdma_max_cq_size": 0, 00:24:34.316 "rdma_cm_event_timeout_ms": 0, 
00:24:34.316 "dhchap_digests": [ 00:24:34.316 "sha256", 00:24:34.316 "sha384", 00:24:34.316 "sha512" 00:24:34.316 ], 00:24:34.316 "dhchap_dhgroups": [ 00:24:34.316 "null", 00:24:34.316 "ffdhe2048", 00:24:34.316 "ffdhe3072", 00:24:34.316 "ffdhe4096", 00:24:34.316 "ffdhe6144", 00:24:34.316 "ffdhe8192" 00:24:34.316 ] 00:24:34.316 } 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "method": "bdev_nvme_attach_controller", 00:24:34.316 "params": { 00:24:34.316 "name": "nvme0", 00:24:34.316 "trtype": "TCP", 00:24:34.316 "adrfam": "IPv4", 00:24:34.316 "traddr": "10.0.0.2", 00:24:34.316 "trsvcid": "4420", 00:24:34.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.316 "prchk_reftag": false, 00:24:34.316 "prchk_guard": false, 00:24:34.316 "ctrlr_loss_timeout_sec": 0, 00:24:34.316 "reconnect_delay_sec": 0, 00:24:34.316 "fast_io_fail_timeout_sec": 0, 00:24:34.316 "psk": "key0", 00:24:34.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.316 "hdgst": false, 00:24:34.316 "ddgst": false, 00:24:34.316 "multipath": "multipath" 00:24:34.316 } 00:24:34.316 }, 00:24:34.316 { 00:24:34.316 "method": "bdev_nvme_set_hotplug", 00:24:34.316 "params": { 00:24:34.316 "period_us": 100000, 00:24:34.316 "enable": false 00:24:34.316 } 00:24:34.316 }, 00:24:34.316 { 00:24:34.317 "method": "bdev_enable_histogram", 00:24:34.317 "params": { 00:24:34.317 "name": "nvme0n1", 00:24:34.317 "enable": true 00:24:34.317 } 00:24:34.317 }, 00:24:34.317 { 00:24:34.317 "method": "bdev_wait_for_examine" 00:24:34.317 } 00:24:34.317 ] 00:24:34.317 }, 00:24:34.317 { 00:24:34.317 "subsystem": "nbd", 00:24:34.317 "config": [] 00:24:34.317 } 00:24:34.317 ] 00:24:34.317 }' 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 773019 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 773019 ']' 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 773019 00:24:34.317 07:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773019 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773019' 00:24:34.317 killing process with pid 773019 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 773019 00:24:34.317 Received shutdown signal, test time was about 1.000000 seconds 00:24:34.317 00:24:34.317 Latency(us) 00:24:34.317 [2024-11-18T06:58:27.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.317 [2024-11-18T06:58:27.405Z] =================================================================================================================== 00:24:34.317 [2024-11-18T06:58:27.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.317 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 773019 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 772996 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 772996 ']' 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 772996 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.578 07:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772996 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772996' 00:24:34.578 killing process with pid 772996 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 772996 00:24:34.578 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 772996 00:24:34.839 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:34.839 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.839 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:34.839 "subsystems": [ 00:24:34.839 { 00:24:34.839 "subsystem": "keyring", 00:24:34.839 "config": [ 00:24:34.839 { 00:24:34.839 "method": "keyring_file_add_key", 00:24:34.839 "params": { 00:24:34.839 "name": "key0", 00:24:34.839 "path": "/tmp/tmp.JeruRQ0A6m" 00:24:34.839 } 00:24:34.839 } 00:24:34.839 ] 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "subsystem": "iobuf", 00:24:34.839 "config": [ 00:24:34.839 { 00:24:34.839 "method": "iobuf_set_options", 00:24:34.839 "params": { 00:24:34.839 "small_pool_count": 8192, 00:24:34.839 "large_pool_count": 1024, 00:24:34.839 "small_bufsize": 8192, 00:24:34.839 "large_bufsize": 135168, 00:24:34.839 "enable_numa": false 00:24:34.839 } 00:24:34.839 } 00:24:34.839 ] 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "subsystem": "sock", 00:24:34.839 "config": [ 00:24:34.839 { 00:24:34.839 "method": "sock_set_default_impl", 00:24:34.839 "params": { 00:24:34.839 "impl_name": "posix" 00:24:34.839 
} 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "method": "sock_impl_set_options", 00:24:34.839 "params": { 00:24:34.839 "impl_name": "ssl", 00:24:34.839 "recv_buf_size": 4096, 00:24:34.839 "send_buf_size": 4096, 00:24:34.839 "enable_recv_pipe": true, 00:24:34.839 "enable_quickack": false, 00:24:34.839 "enable_placement_id": 0, 00:24:34.839 "enable_zerocopy_send_server": true, 00:24:34.839 "enable_zerocopy_send_client": false, 00:24:34.839 "zerocopy_threshold": 0, 00:24:34.839 "tls_version": 0, 00:24:34.839 "enable_ktls": false 00:24:34.839 } 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "method": "sock_impl_set_options", 00:24:34.839 "params": { 00:24:34.839 "impl_name": "posix", 00:24:34.839 "recv_buf_size": 2097152, 00:24:34.839 "send_buf_size": 2097152, 00:24:34.839 "enable_recv_pipe": true, 00:24:34.839 "enable_quickack": false, 00:24:34.839 "enable_placement_id": 0, 00:24:34.839 "enable_zerocopy_send_server": true, 00:24:34.839 "enable_zerocopy_send_client": false, 00:24:34.839 "zerocopy_threshold": 0, 00:24:34.839 "tls_version": 0, 00:24:34.839 "enable_ktls": false 00:24:34.839 } 00:24:34.839 } 00:24:34.839 ] 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "subsystem": "vmd", 00:24:34.839 "config": [] 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "subsystem": "accel", 00:24:34.839 "config": [ 00:24:34.839 { 00:24:34.839 "method": "accel_set_options", 00:24:34.839 "params": { 00:24:34.839 "small_cache_size": 128, 00:24:34.839 "large_cache_size": 16, 00:24:34.839 "task_count": 2048, 00:24:34.839 "sequence_count": 2048, 00:24:34.839 "buf_count": 2048 00:24:34.839 } 00:24:34.839 } 00:24:34.839 ] 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "subsystem": "bdev", 00:24:34.839 "config": [ 00:24:34.839 { 00:24:34.839 "method": "bdev_set_options", 00:24:34.839 "params": { 00:24:34.839 "bdev_io_pool_size": 65535, 00:24:34.839 "bdev_io_cache_size": 256, 00:24:34.839 "bdev_auto_examine": true, 00:24:34.839 "iobuf_small_cache_size": 128, 00:24:34.839 "iobuf_large_cache_size": 16 
00:24:34.839 } 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "method": "bdev_raid_set_options", 00:24:34.839 "params": { 00:24:34.839 "process_window_size_kb": 1024, 00:24:34.839 "process_max_bandwidth_mb_sec": 0 00:24:34.839 } 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "method": "bdev_iscsi_set_options", 00:24:34.839 "params": { 00:24:34.839 "timeout_sec": 30 00:24:34.839 } 00:24:34.839 }, 00:24:34.839 { 00:24:34.839 "method": "bdev_nvme_set_options", 00:24:34.839 "params": { 00:24:34.839 "action_on_timeout": "none", 00:24:34.839 "timeout_us": 0, 00:24:34.839 "timeout_admin_us": 0, 00:24:34.839 "keep_alive_timeout_ms": 10000, 00:24:34.839 "arbitration_burst": 0, 00:24:34.839 "low_priority_weight": 0, 00:24:34.839 "medium_priority_weight": 0, 00:24:34.839 "high_priority_weight": 0, 00:24:34.839 "nvme_adminq_poll_period_us": 10000, 00:24:34.839 "nvme_ioq_poll_period_us": 0, 00:24:34.839 "io_queue_requests": 0, 00:24:34.839 "delay_cmd_submit": true, 00:24:34.839 "transport_retry_count": 4, 00:24:34.839 "bdev_retry_count": 3, 00:24:34.839 "transport_ack_timeout": 0, 00:24:34.839 "ctrlr_loss_timeout_sec": 0, 00:24:34.839 "reconnect_delay_sec": 0, 00:24:34.839 "fast_io_fail_timeout_sec": 0, 00:24:34.839 "disable_auto_failback": false, 00:24:34.839 "generate_uuids": false, 00:24:34.839 "transport_tos": 0, 00:24:34.839 "nvme_error_stat": false, 00:24:34.839 "rdma_srq_size": 0, 00:24:34.839 "io_path_stat": false, 00:24:34.839 "allow_accel_sequence": false, 00:24:34.839 "rdma_max_cq_size": 0, 00:24:34.839 "rdma_cm_event_timeout_ms": 0, 00:24:34.839 "dhchap_digests": [ 00:24:34.839 "sha256", 00:24:34.839 "sha384", 00:24:34.839 "sha512" 00:24:34.839 ], 00:24:34.839 "dhchap_dhgroups": [ 00:24:34.839 "null", 00:24:34.839 "ffdhe2048", 00:24:34.839 "ffdhe3072", 00:24:34.839 "ffdhe4096", 00:24:34.840 "ffdhe6144", 00:24:34.840 "ffdhe8192" 00:24:34.840 ] 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "bdev_nvme_set_hotplug", 00:24:34.840 "params": { 00:24:34.840 
"period_us": 100000, 00:24:34.840 "enable": false 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "bdev_malloc_create", 00:24:34.840 "params": { 00:24:34.840 "name": "malloc0", 00:24:34.840 "num_blocks": 8192, 00:24:34.840 "block_size": 4096, 00:24:34.840 "physical_block_size": 4096, 00:24:34.840 "uuid": "2e7f8670-9996-411c-85c1-cca658eda98a", 00:24:34.840 "optimal_io_boundary": 0, 00:24:34.840 "md_size": 0, 00:24:34.840 "dif_type": 0, 00:24:34.840 "dif_is_head_of_md": false, 00:24:34.840 "dif_pi_format": 0 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "bdev_wait_for_examine" 00:24:34.840 } 00:24:34.840 ] 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "subsystem": "nbd", 00:24:34.840 "config": [] 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "subsystem": "scheduler", 00:24:34.840 "config": [ 00:24:34.840 { 00:24:34.840 "method": "framework_set_scheduler", 00:24:34.840 "params": { 00:24:34.840 "name": "static" 00:24:34.840 } 00:24:34.840 } 00:24:34.840 ] 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "subsystem": "nvmf", 00:24:34.840 "config": [ 00:24:34.840 { 00:24:34.840 "method": "nvmf_set_config", 00:24:34.840 "params": { 00:24:34.840 "discovery_filter": "match_any", 00:24:34.840 "admin_cmd_passthru": { 00:24:34.840 "identify_ctrlr": false 00:24:34.840 }, 00:24:34.840 "dhchap_digests": [ 00:24:34.840 "sha256", 00:24:34.840 "sha384", 00:24:34.840 "sha512" 00:24:34.840 ], 00:24:34.840 "dhchap_dhgroups": [ 00:24:34.840 "null", 00:24:34.840 "ffdhe2048", 00:24:34.840 "ffdhe3072", 00:24:34.840 "ffdhe4096", 00:24:34.840 "ffdhe6144", 00:24:34.840 "ffdhe8192" 00:24:34.840 ] 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "nvmf_set_max_subsystems", 00:24:34.840 "params": { 00:24:34.840 "max_subsystems": 1024 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "nvmf_set_crdt", 00:24:34.840 "params": { 00:24:34.840 "crdt1": 0, 00:24:34.840 "crdt2": 0, 00:24:34.840 "crdt3": 0 00:24:34.840 } 
00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "nvmf_create_transport", 00:24:34.840 "params": { 00:24:34.840 "trtype": "TCP", 00:24:34.840 "max_queue_depth": 128, 00:24:34.840 "max_io_qpairs_per_ctrlr": 127, 00:24:34.840 "in_capsule_data_size": 4096, 00:24:34.840 "max_io_size": 131072, 00:24:34.840 "io_unit_size": 131072, 00:24:34.840 "max_aq_depth": 128, 00:24:34.840 "num_shared_buffers": 511, 00:24:34.840 "buf_cache_size": 4294967295, 00:24:34.840 "dif_insert_or_strip": false, 00:24:34.840 "zcopy": false, 00:24:34.840 "c2h_success": false, 00:24:34.840 "sock_priority": 0, 00:24:34.840 "abort_timeout_sec": 1, 00:24:34.840 "ack_timeout": 0, 00:24:34.840 "data_wr_pool_size": 0 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "nvmf_create_subsystem", 00:24:34.840 "params": { 00:24:34.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.840 "allow_any_host": false, 00:24:34.840 "serial_number": "00000000000000000000", 00:24:34.840 "model_number": "SPDK bdev Controller", 00:24:34.840 "max_namespaces": 32, 00:24:34.840 "min_cntlid": 1, 00:24:34.840 "max_cntlid": 65519, 00:24:34.840 "ana_reporting": false 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "nvmf_subsystem_add_host", 00:24:34.840 "params": { 00:24:34.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.840 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.840 "psk": "key0" 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "nvmf_subsystem_add_ns", 00:24:34.840 "params": { 00:24:34.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.840 "namespace": { 00:24:34.840 "nsid": 1, 00:24:34.840 "bdev_name": "malloc0", 00:24:34.840 "nguid": "2E7F86709996411C85C1CCA658EDA98A", 00:24:34.840 "uuid": "2e7f8670-9996-411c-85c1-cca658eda98a", 00:24:34.840 "no_auto_visible": false 00:24:34.840 } 00:24:34.840 } 00:24:34.840 }, 00:24:34.840 { 00:24:34.840 "method": "nvmf_subsystem_add_listener", 00:24:34.840 "params": { 00:24:34.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:34.840 "listen_address": { 00:24:34.840 "trtype": "TCP", 00:24:34.840 "adrfam": "IPv4", 00:24:34.840 "traddr": "10.0.0.2", 00:24:34.840 "trsvcid": "4420" 00:24:34.840 }, 00:24:34.840 "secure_channel": false, 00:24:34.840 "sock_impl": "ssl" 00:24:34.840 } 00:24:34.840 } 00:24:34.840 ] 00:24:34.840 } 00:24:34.840 ] 00:24:34.840 }' 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=773425 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 773425 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 773425 ']' 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.840 07:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.840 [2024-11-18 07:58:27.782793] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:34.840 [2024-11-18 07:58:27.782891] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.840 [2024-11-18 07:58:27.857049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.840 [2024-11-18 07:58:27.904889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.840 [2024-11-18 07:58:27.904951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.840 [2024-11-18 07:58:27.904965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.840 [2024-11-18 07:58:27.904985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.840 [2024-11-18 07:58:27.904995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.840 [2024-11-18 07:58:27.905665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.100 [2024-11-18 07:58:28.149248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.100 [2024-11-18 07:58:28.181288] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.100 [2024-11-18 07:58:28.181601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=773575 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 773575 /var/tmp/bdevperf.sock 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 773575 ']' 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.036 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:36.036 "subsystems": [ 00:24:36.036 { 00:24:36.036 "subsystem": "keyring", 00:24:36.036 "config": [ 00:24:36.036 { 00:24:36.036 "method": "keyring_file_add_key", 00:24:36.036 "params": { 00:24:36.036 "name": "key0", 00:24:36.036 "path": "/tmp/tmp.JeruRQ0A6m" 00:24:36.036 } 00:24:36.036 } 00:24:36.036 ] 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "subsystem": "iobuf", 00:24:36.036 "config": [ 00:24:36.036 { 00:24:36.036 "method": "iobuf_set_options", 00:24:36.036 "params": { 00:24:36.036 "small_pool_count": 8192, 00:24:36.036 "large_pool_count": 1024, 00:24:36.036 "small_bufsize": 8192, 00:24:36.036 "large_bufsize": 135168, 00:24:36.036 "enable_numa": false 00:24:36.036 } 00:24:36.036 } 00:24:36.036 ] 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "subsystem": "sock", 00:24:36.036 "config": [ 00:24:36.036 { 00:24:36.036 "method": "sock_set_default_impl", 00:24:36.036 "params": { 00:24:36.036 "impl_name": "posix" 00:24:36.036 } 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "method": "sock_impl_set_options", 00:24:36.036 "params": { 00:24:36.036 "impl_name": "ssl", 00:24:36.036 "recv_buf_size": 4096, 00:24:36.036 "send_buf_size": 4096, 00:24:36.036 "enable_recv_pipe": true, 00:24:36.036 "enable_quickack": false, 00:24:36.036 "enable_placement_id": 0, 00:24:36.036 "enable_zerocopy_send_server": true, 00:24:36.036 "enable_zerocopy_send_client": false, 00:24:36.036 "zerocopy_threshold": 0, 00:24:36.036 "tls_version": 0, 00:24:36.036 "enable_ktls": false 00:24:36.036 } 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "method": "sock_impl_set_options", 00:24:36.036 "params": { 
00:24:36.036 "impl_name": "posix", 00:24:36.036 "recv_buf_size": 2097152, 00:24:36.036 "send_buf_size": 2097152, 00:24:36.036 "enable_recv_pipe": true, 00:24:36.036 "enable_quickack": false, 00:24:36.036 "enable_placement_id": 0, 00:24:36.036 "enable_zerocopy_send_server": true, 00:24:36.036 "enable_zerocopy_send_client": false, 00:24:36.036 "zerocopy_threshold": 0, 00:24:36.036 "tls_version": 0, 00:24:36.036 "enable_ktls": false 00:24:36.036 } 00:24:36.036 } 00:24:36.036 ] 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "subsystem": "vmd", 00:24:36.036 "config": [] 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "subsystem": "accel", 00:24:36.036 "config": [ 00:24:36.036 { 00:24:36.036 "method": "accel_set_options", 00:24:36.036 "params": { 00:24:36.036 "small_cache_size": 128, 00:24:36.036 "large_cache_size": 16, 00:24:36.036 "task_count": 2048, 00:24:36.036 "sequence_count": 2048, 00:24:36.036 "buf_count": 2048 00:24:36.036 } 00:24:36.036 } 00:24:36.036 ] 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "subsystem": "bdev", 00:24:36.036 "config": [ 00:24:36.036 { 00:24:36.036 "method": "bdev_set_options", 00:24:36.036 "params": { 00:24:36.036 "bdev_io_pool_size": 65535, 00:24:36.036 "bdev_io_cache_size": 256, 00:24:36.036 "bdev_auto_examine": true, 00:24:36.036 "iobuf_small_cache_size": 128, 00:24:36.036 "iobuf_large_cache_size": 16 00:24:36.036 } 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "method": "bdev_raid_set_options", 00:24:36.036 "params": { 00:24:36.036 "process_window_size_kb": 1024, 00:24:36.036 "process_max_bandwidth_mb_sec": 0 00:24:36.036 } 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "method": "bdev_iscsi_set_options", 00:24:36.036 "params": { 00:24:36.036 "timeout_sec": 30 00:24:36.036 } 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "method": "bdev_nvme_set_options", 00:24:36.036 "params": { 00:24:36.036 "action_on_timeout": "none", 00:24:36.036 "timeout_us": 0, 00:24:36.036 "timeout_admin_us": 0, 00:24:36.036 "keep_alive_timeout_ms": 10000, 00:24:36.036 
"arbitration_burst": 0, 00:24:36.036 "low_priority_weight": 0, 00:24:36.036 "medium_priority_weight": 0, 00:24:36.036 "high_priority_weight": 0, 00:24:36.036 "nvme_adminq_poll_period_us": 10000, 00:24:36.036 "nvme_ioq_poll_period_us": 0, 00:24:36.036 "io_queue_requests": 512, 00:24:36.036 "delay_cmd_submit": true, 00:24:36.036 "transport_retry_count": 4, 00:24:36.036 "bdev_retry_count": 3, 00:24:36.036 "transport_ack_timeout": 0, 00:24:36.036 "ctrlr_loss_timeout_sec": 0, 00:24:36.036 "reconnect_delay_sec": 0, 00:24:36.036 "fast_io_fail_timeout_sec": 0, 00:24:36.036 "disable_auto_failback": false, 00:24:36.036 "generate_uuids": false, 00:24:36.036 "transport_tos": 0, 00:24:36.036 "nvme_error_stat": false, 00:24:36.036 "rdma_srq_size": 0, 00:24:36.036 "io_path_stat": false, 00:24:36.036 "allow_accel_sequence": false, 00:24:36.036 "rdma_max_cq_size": 0, 00:24:36.036 "rdma_cm_event_timeout_ms": 0, 00:24:36.036 "dhchap_digests": [ 00:24:36.036 "sha256", 00:24:36.036 "sha384", 00:24:36.036 "sha512" 00:24:36.036 ], 00:24:36.036 "dhchap_dhgroups": [ 00:24:36.036 "null", 00:24:36.036 "ffdhe2048", 00:24:36.036 "ffdhe3072", 00:24:36.036 "ffdhe4096", 00:24:36.036 "ffdhe6144", 00:24:36.036 "ffdhe8192" 00:24:36.036 ] 00:24:36.036 } 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "method": "bdev_nvme_attach_controller", 00:24:36.036 "params": { 00:24:36.036 "name": "nvme0", 00:24:36.036 "trtype": "TCP", 00:24:36.036 "adrfam": "IPv4", 00:24:36.036 "traddr": "10.0.0.2", 00:24:36.036 "trsvcid": "4420", 00:24:36.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.036 "prchk_reftag": false, 00:24:36.036 "prchk_guard": false, 00:24:36.036 "ctrlr_loss_timeout_sec": 0, 00:24:36.036 "reconnect_delay_sec": 0, 00:24:36.036 "fast_io_fail_timeout_sec": 0, 00:24:36.036 "psk": "key0", 00:24:36.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.036 "hdgst": false, 00:24:36.036 "ddgst": false, 00:24:36.036 "multipath": "multipath" 00:24:36.036 } 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 
"method": "bdev_nvme_set_hotplug", 00:24:36.036 "params": { 00:24:36.036 "period_us": 100000, 00:24:36.036 "enable": false 00:24:36.036 } 00:24:36.036 }, 00:24:36.036 { 00:24:36.036 "method": "bdev_enable_histogram", 00:24:36.036 "params": { 00:24:36.036 "name": "nvme0n1", 00:24:36.036 "enable": true 00:24:36.036 } 00:24:36.037 }, 00:24:36.037 { 00:24:36.037 "method": "bdev_wait_for_examine" 00:24:36.037 } 00:24:36.037 ] 00:24:36.037 }, 00:24:36.037 { 00:24:36.037 "subsystem": "nbd", 00:24:36.037 "config": [] 00:24:36.037 } 00:24:36.037 ] 00:24:36.037 }' 00:24:36.037 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.037 07:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.037 [2024-11-18 07:58:28.904897] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:36.037 [2024-11-18 07:58:28.905001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773575 ] 00:24:36.037 [2024-11-18 07:58:28.973825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.037 [2024-11-18 07:58:29.019047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.294 [2024-11-18 07:58:29.194936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.294 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.294 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:36.294 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.294 07:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:36.551 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.551 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.810 Running I/O for 1 seconds... 00:24:37.749 3237.00 IOPS, 12.64 MiB/s 00:24:37.749 Latency(us) 00:24:37.749 [2024-11-18T06:58:30.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.749 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:37.749 Verification LBA range: start 0x0 length 0x2000 00:24:37.749 nvme0n1 : 1.02 3296.41 12.88 0.00 0.00 38491.28 6990.51 33593.27 00:24:37.749 [2024-11-18T06:58:30.837Z] =================================================================================================================== 00:24:37.749 [2024-11-18T06:58:30.837Z] Total : 3296.41 12.88 0.00 0.00 38491.28 6990.51 33593.27 00:24:37.749 { 00:24:37.749 "results": [ 00:24:37.749 { 00:24:37.749 "job": "nvme0n1", 00:24:37.749 "core_mask": "0x2", 00:24:37.749 "workload": "verify", 00:24:37.749 "status": "finished", 00:24:37.749 "verify_range": { 00:24:37.749 "start": 0, 00:24:37.749 "length": 8192 00:24:37.749 }, 00:24:37.749 "queue_depth": 128, 00:24:37.749 "io_size": 4096, 00:24:37.749 "runtime": 1.020808, 00:24:37.749 "iops": 3296.4083353578735, 00:24:37.749 "mibps": 12.876595059991693, 00:24:37.749 "io_failed": 0, 00:24:37.749 "io_timeout": 0, 00:24:37.749 "avg_latency_us": 38491.275362280554, 00:24:37.749 "min_latency_us": 6990.506666666667, 00:24:37.749 "max_latency_us": 33593.26814814815 00:24:37.749 } 00:24:37.749 ], 00:24:37.749 "core_count": 1 00:24:37.749 } 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:37.749 07:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:37.749 nvmf_trace.0 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 773575 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 773575 ']' 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 773575 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:37.749 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.009 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 773575 00:24:38.009 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.009 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.009 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773575' 00:24:38.009 killing process with pid 773575 00:24:38.009 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 773575 00:24:38.009 Received shutdown signal, test time was about 1.000000 seconds 00:24:38.009 00:24:38.009 Latency(us) 00:24:38.009 [2024-11-18T06:58:31.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.009 [2024-11-18T06:58:31.097Z] =================================================================================================================== 00:24:38.009 [2024-11-18T06:58:31.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.009 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 773575 00:24:38.009 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:38.009 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.009 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:38.009 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.009 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:38.009 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.009 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.009 rmmod nvme_tcp 00:24:38.269 rmmod nvme_fabrics 00:24:38.269 rmmod nvme_keyring 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 773425 ']' 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 773425 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 773425 ']' 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 773425 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773425 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773425' 00:24:38.269 killing process with pid 773425 00:24:38.269 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 773425 00:24:38.270 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 773425 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.531 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9gaA9XUlw5 /tmp/tmp.ULLX64rLaX /tmp/tmp.JeruRQ0A6m 00:24:40.438 00:24:40.438 real 1m22.176s 00:24:40.438 user 2m18.183s 00:24:40.438 sys 0m24.862s 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.438 ************************************ 00:24:40.438 END TEST nvmf_tls 00:24:40.438 ************************************ 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:40.438 ************************************ 00:24:40.438 START TEST nvmf_fips 00:24:40.438 ************************************ 00:24:40.438 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:40.700 * Looking for test storage... 00:24:40.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.700 
07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:40.700 07:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:40.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.700 --rc genhtml_branch_coverage=1 00:24:40.700 --rc genhtml_function_coverage=1 00:24:40.700 --rc genhtml_legend=1 00:24:40.700 --rc geninfo_all_blocks=1 00:24:40.700 --rc geninfo_unexecuted_blocks=1 00:24:40.700 00:24:40.700 ' 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:40.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.700 --rc genhtml_branch_coverage=1 00:24:40.700 --rc genhtml_function_coverage=1 00:24:40.700 --rc genhtml_legend=1 00:24:40.700 --rc geninfo_all_blocks=1 00:24:40.700 --rc geninfo_unexecuted_blocks=1 00:24:40.700 00:24:40.700 ' 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:40.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.700 --rc genhtml_branch_coverage=1 00:24:40.700 --rc genhtml_function_coverage=1 00:24:40.700 --rc genhtml_legend=1 00:24:40.700 --rc geninfo_all_blocks=1 00:24:40.700 --rc geninfo_unexecuted_blocks=1 00:24:40.700 00:24:40.700 ' 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:40.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.700 --rc genhtml_branch_coverage=1 00:24:40.700 --rc genhtml_function_coverage=1 00:24:40.700 --rc genhtml_legend=1 00:24:40.700 --rc geninfo_all_blocks=1 00:24:40.700 --rc geninfo_unexecuted_blocks=1 00:24:40.700 00:24:40.700 ' 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.700 07:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.700 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.701 07:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:40.701 Error setting digest 00:24:40.701 40725C875E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:40.701 40725C875E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.701 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.702 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.702 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.702 07:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.702 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.702 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.702 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.702 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.702 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.702 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:43.293 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:43.293 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:43.293 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:43.293 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.293 07:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.293 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:24:43.294 00:24:43.294 --- 10.0.0.2 ping statistics --- 00:24:43.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.294 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:24:43.294 00:24:43.294 --- 10.0.0.1 ping statistics --- 00:24:43.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.294 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.294 07:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=775817 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 775817 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 775817 ']' 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.294 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.294 [2024-11-18 07:58:36.271023] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:43.294 [2024-11-18 07:58:36.271131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.294 [2024-11-18 07:58:36.342838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.553 [2024-11-18 07:58:36.387965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.553 [2024-11-18 07:58:36.388010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.553 [2024-11-18 07:58:36.388055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.553 [2024-11-18 07:58:36.388065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.553 [2024-11-18 07:58:36.388075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:43.553 [2024-11-18 07:58:36.388652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ciI 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ciI 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ciI 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ciI 00:24:43.553 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.812 [2024-11-18 07:58:36.785348] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.812 [2024-11-18 07:58:36.801336] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.812 [2024-11-18 07:58:36.801571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.812 malloc0 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=775966 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 775966 /var/tmp/bdevperf.sock 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 775966 ']' 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.812 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:44.070 [2024-11-18 07:58:36.925969] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:44.070 [2024-11-18 07:58:36.926046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775966 ] 00:24:44.070 [2024-11-18 07:58:36.997598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.070 [2024-11-18 07:58:37.043642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.070 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.070 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:44.070 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ciI 00:24:44.330 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:44.589 [2024-11-18 07:58:37.671668] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.849 TLSTESTn1 00:24:44.849 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.849 Running I/O for 10 seconds... 
00:24:46.793 3280.00 IOPS, 12.81 MiB/s [2024-11-18T06:58:41.262Z] 3408.50 IOPS, 13.31 MiB/s [2024-11-18T06:58:42.203Z] 3417.67 IOPS, 13.35 MiB/s [2024-11-18T06:58:43.142Z] 3442.25 IOPS, 13.45 MiB/s [2024-11-18T06:58:44.081Z] 3452.40 IOPS, 13.49 MiB/s [2024-11-18T06:58:45.018Z] 3465.17 IOPS, 13.54 MiB/s [2024-11-18T06:58:45.956Z] 3482.57 IOPS, 13.60 MiB/s [2024-11-18T06:58:47.334Z] 3485.50 IOPS, 13.62 MiB/s [2024-11-18T06:58:47.902Z] 3483.56 IOPS, 13.61 MiB/s [2024-11-18T06:58:48.160Z] 3452.80 IOPS, 13.49 MiB/s 00:24:55.072 Latency(us) 00:24:55.072 [2024-11-18T06:58:48.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.072 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:55.072 Verification LBA range: start 0x0 length 0x2000 00:24:55.072 TLSTESTn1 : 10.03 3455.78 13.50 0.00 0.00 36966.74 6213.78 48933.55 00:24:55.072 [2024-11-18T06:58:48.160Z] =================================================================================================================== 00:24:55.072 [2024-11-18T06:58:48.160Z] Total : 3455.78 13.50 0.00 0.00 36966.74 6213.78 48933.55 00:24:55.072 { 00:24:55.072 "results": [ 00:24:55.072 { 00:24:55.072 "job": "TLSTESTn1", 00:24:55.072 "core_mask": "0x4", 00:24:55.072 "workload": "verify", 00:24:55.072 "status": "finished", 00:24:55.072 "verify_range": { 00:24:55.072 "start": 0, 00:24:55.072 "length": 8192 00:24:55.072 }, 00:24:55.072 "queue_depth": 128, 00:24:55.072 "io_size": 4096, 00:24:55.072 "runtime": 10.028135, 00:24:55.072 "iops": 3455.7771709295894, 00:24:55.072 "mibps": 13.499129573943708, 00:24:55.072 "io_failed": 0, 00:24:55.072 "io_timeout": 0, 00:24:55.072 "avg_latency_us": 36966.73967533946, 00:24:55.072 "min_latency_us": 6213.783703703703, 00:24:55.072 "max_latency_us": 48933.54666666667 00:24:55.072 } 00:24:55.072 ], 00:24:55.072 "core_count": 1 00:24:55.072 } 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:55.072 
07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:55.072 nvmf_trace.0 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 775966 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 775966 ']' 00:24:55.072 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 775966 00:24:55.072 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:55.072 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.072 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775966 00:24:55.072 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:55.072 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:55.072 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775966' 00:24:55.072 killing process with pid 775966 00:24:55.072 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 775966 00:24:55.072 Received shutdown signal, test time was about 10.000000 seconds 00:24:55.072 00:24:55.072 Latency(us) 00:24:55.072 [2024-11-18T06:58:48.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.072 [2024-11-18T06:58:48.160Z] =================================================================================================================== 00:24:55.072 [2024-11-18T06:58:48.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.072 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 775966 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.330 rmmod nvme_tcp 00:24:55.330 rmmod nvme_fabrics 00:24:55.330 rmmod nvme_keyring 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.330 07:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 775817 ']' 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 775817 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 775817 ']' 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 775817 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.330 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775817 00:24:55.331 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:55.331 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:55.331 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775817' 00:24:55.331 killing process with pid 775817 00:24:55.331 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 775817 00:24:55.331 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 775817 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.591 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.501 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.501 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ciI 00:24:57.501 00:24:57.501 real 0m17.064s 00:24:57.501 user 0m22.438s 00:24:57.501 sys 0m5.490s 00:24:57.501 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.501 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.501 ************************************ 00:24:57.501 END TEST nvmf_fips 00:24:57.501 ************************************ 00:24:57.501 07:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:57.501 07:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.501 07:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:57.501 07:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:57.762 ************************************ 00:24:57.762 START TEST nvmf_control_msg_list 00:24:57.762 ************************************ 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:57.762 * Looking for test storage... 00:24:57.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.762 --rc genhtml_branch_coverage=1 00:24:57.762 --rc genhtml_function_coverage=1 00:24:57.762 --rc genhtml_legend=1 00:24:57.762 --rc geninfo_all_blocks=1 00:24:57.762 --rc geninfo_unexecuted_blocks=1 00:24:57.762 00:24:57.762 ' 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.762 --rc genhtml_branch_coverage=1 00:24:57.762 --rc genhtml_function_coverage=1 00:24:57.762 --rc genhtml_legend=1 00:24:57.762 --rc geninfo_all_blocks=1 00:24:57.762 --rc geninfo_unexecuted_blocks=1 00:24:57.762 00:24:57.762 ' 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.762 --rc genhtml_branch_coverage=1 00:24:57.762 --rc genhtml_function_coverage=1 00:24:57.762 --rc genhtml_legend=1 00:24:57.762 --rc geninfo_all_blocks=1 00:24:57.762 --rc geninfo_unexecuted_blocks=1 00:24:57.762 00:24:57.762 ' 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.762 --rc 
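The `cmp_versions` trace above splits version strings on `.`, `-`, and `:` into arrays and compares them component-wise as integers (so `1.15 < 2` holds numerically, where a plain string compare would get it wrong). A minimal sketch of that technique, with a hypothetical helper name (`ver_lt`; the real script uses `lt`/`cmp_versions` in `scripts/common.sh`):

```shell
# Return 0 (true) if version $1 is strictly less than version $2.
# Components are compared numerically, left to right; missing
# components are treated as 0. Hypothetical simplified helper.
ver_lt() {
    local IFS=.-:          # split on the same separators the log shows
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=${#v1[@]}
    (( ${#v2[@]} > max )) && max=${#v2[@]}
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1               # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

The numeric per-component compare is what makes `2 < 10` come out right, which lexicographic string comparison would not.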
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.762 --rc genhtml_branch_coverage=1 00:24:57.762 --rc genhtml_function_coverage=1 00:24:57.762 --rc genhtml_legend=1 00:24:57.762 --rc geninfo_all_blocks=1 00:24:57.762 --rc geninfo_unexecuted_blocks=1 00:24:57.762 00:24:57.762 ' 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.762 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.763 07:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.763 07:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.763 07:58:50 
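The `paths/export.sh` traces above show the same tool directories prepended to `PATH` repeatedly, because `export.sh` is re-sourced on every nested `source` of `common.sh`. Harmless, but noisy. A common deduplication idiom (hypothetical helper name, not part of the SPDK scripts) keeps the first occurrence of each entry:

```shell
# Remove duplicate entries from a colon-separated path string,
# preserving first-seen order. Hypothetical helper for illustration.
dedup_path() {
    local out= dir
    local IFS=:
    for dir in $1; do                 # IFS=: splits on colons
        case ":$out:" in
            *":$dir:"*) ;;            # already present, skip
            *) out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/local/bin:/opt/go/1.21.1/bin"
```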
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.763 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.301 07:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.301 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:00.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:00.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.302 07:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:00.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.302 07:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:00.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
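The device-discovery traces above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `"${pci_net_devs[@]##*/}"`) map each matched PCI function to its kernel net devices by globbing sysfs, then strip the directory prefix to get bare interface names like `cvl_0_0`. A sketch of that mapping, with the sysfs root parameterized so it can be exercised against a fake tree (the helper name and the parameter are illustrative, not SPDK's):

```shell
# List the net devices belonging to a PCI function by globbing a
# sysfs-style tree: <root>/<pci-address>/net/<ifname>.
# Hypothetical helper; the harness globs /sys/bus/pci/devices directly.
list_pci_net_devs() {
    local root=$1 pci=$2
    local devs=("$root/$pci/net/"*)
    # Same prefix-strip as "${pci_net_devs[@]##*/}" in the log.
    printf '%s\n' "${devs[@]##*/}"
}
```

Note that if the glob matches nothing, bash leaves the literal pattern in the array; the real script guards against that before using the result.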
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.302 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.302 07:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:25:00.302 00:25:00.302 --- 10.0.0.2 ping statistics --- 00:25:00.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.302 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:00.302 00:25:00.302 --- 10.0.0.1 ping statistics --- 00:25:00.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.302 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
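The `nvmf_tcp_init` traces above carry the core plumbing of this phy run: the target-side NIC (`cvl_0_0`) is moved into a private network namespace, both sides get addresses on `10.0.0.0/24`, links come up, an iptables ACCEPT rule opens port 4420, and bidirectional pings verify the path. A sketch that emits that command sequence (rather than executing it, since it needs root); the helper name is hypothetical and the 10.0.0.x addressing mirrors the log:

```shell
# Print the netns setup commands the harness runs, in order.
# Emitting instead of executing keeps this runnable without root.
setup_ns_cmds() {
    local ns=$1 target_if=$2 initiator_if=$3
    cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

setup_ns_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Isolating the target NIC in its own namespace is what forces target and initiator traffic onto the physical wire instead of the kernel loopback path.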
tcp -o' 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=779230 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 779230 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 779230 ']' 00:25:00.302 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.303 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.303 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
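The `waitforlisten` call above blocks until `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock` (with `max_retries=100`). The real helper probes the RPC socket; a minimal sketch of the underlying poll-with-retries pattern, using a hypothetical helper name and a bare socket-file check in place of an actual RPC probe:

```shell
# Poll until a UNIX socket appears at $1, retrying up to $2 times
# with a short sleep between attempts. Simplified stand-in for
# waitforlisten, which additionally issues an RPC to confirm liveness.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

In the log this is what separates "process forked" from "process ready": the PID exists immediately, but RPCs only succeed once the app binds its socket.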
00:25:00.303 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.303 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.303 [2024-11-18 07:58:53.230545] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:25:00.303 [2024-11-18 07:58:53.230624] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.303 [2024-11-18 07:58:53.301305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.303 [2024-11-18 07:58:53.344129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.303 [2024-11-18 07:58:53.344192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.303 [2024-11-18 07:58:53.344216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.303 [2024-11-18 07:58:53.344226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.303 [2024-11-18 07:58:53.344235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:00.303 [2024-11-18 07:58:53.344837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.561 [2024-11-18 07:58:53.478400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.561 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.562 Malloc0 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.562 [2024-11-18 07:58:53.516628] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=779251 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=779252 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=779253 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.562 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 779251 00:25:00.562 [2024-11-18 07:58:53.595700] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
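The trace above records the standard SPDK target bring-up from control_msg_list.sh: create a subsystem, back it with a malloc bdev, attach the bdev as a namespace, then open a TCP listener on 10.0.0.2:4420. As a hedged summary of that sequence (the `scripts/rpc.py` path and the dry-run wrapper are illustrative assumptions; the log itself only shows `rpc_cmd` invocations), the four RPCs reduce to:

```shell
#!/usr/bin/env bash
# Sketch of the target bring-up recorded above (control_msg_list.sh
# steps 21-23). RPC path and the dry-run wrapper are assumptions for
# illustration; the log only shows the rpc_cmd calls themselves.
set -euo pipefail

RPC="scripts/rpc.py"                 # assumed location of the SPDK RPC client
NQN="nqn.2024-07.io.spdk:cnode0"     # subsystem NQN taken from the log
: "${DRY_RUN:=1}"                    # default to printing, not executing

rpc_cmd() {
    if [[ "$DRY_RUN" == 1 ]]; then
        echo "$RPC $*"               # show what would be sent to nvmf_tgt
    else
        "$RPC" "$@"
    fi
}

rpc_cmd nvmf_create_subsystem "$NQN" -a               # -a: allow any host
rpc_cmd bdev_malloc_create -b Malloc0 32 512          # 32 MiB bdev, 512 B blocks
rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc0          # attach bdev as NSID 1
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up, the three `spdk_nvme_perf` instances in the log connect to it concurrently on lcores 1-3, which is what exercises the control-message list under test.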
00:25:00.562 [2024-11-18 07:58:53.596013] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:00.562 [2024-11-18 07:58:53.596278] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:01.944 Initializing NVMe Controllers 00:25:01.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:01.944 Initialization complete. Launching workers. 00:25:01.944 ======================================================== 00:25:01.944 Latency(us) 00:25:01.944 Device Information : IOPS MiB/s Average min max 00:25:01.944 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40959.90 40386.54 41898.91 00:25:01.944 ======================================================== 00:25:01.944 Total : 25.00 0.10 40959.90 40386.54 41898.91 00:25:01.944 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 779252 00:25:01.944 Initializing NVMe Controllers 00:25:01.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:01.944 Initialization complete. Launching workers. 
00:25:01.944 ======================================================== 00:25:01.944 Latency(us) 00:25:01.944 Device Information : IOPS MiB/s Average min max 00:25:01.944 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5334.00 20.84 187.08 160.66 522.40 00:25:01.944 ======================================================== 00:25:01.944 Total : 5334.00 20.84 187.08 160.66 522.40 00:25:01.944 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 779253 00:25:01.944 Initializing NVMe Controllers 00:25:01.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:01.944 Initialization complete. Launching workers. 00:25:01.944 ======================================================== 00:25:01.944 Latency(us) 00:25:01.944 Device Information : IOPS MiB/s Average min max 00:25:01.944 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40897.28 40815.76 40987.90 00:25:01.944 ======================================================== 00:25:01.944 Total : 25.00 0.10 40897.28 40815.76 40987.90 00:25:01.944 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:01.944 07:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.944 rmmod nvme_tcp 00:25:01.944 rmmod nvme_fabrics 00:25:01.944 rmmod nvme_keyring 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 779230 ']' 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 779230 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 779230 ']' 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 779230 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779230 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779230' 00:25:01.944 killing process with pid 779230 00:25:01.944 07:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 779230 00:25:01.944 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 779230 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.205 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.112 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:04.112 00:25:04.113 real 0m6.571s 00:25:04.113 user 0m5.638s 00:25:04.113 sys 0m2.782s 00:25:04.113 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:04.113 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:04.113 ************************************ 00:25:04.113 END TEST nvmf_control_msg_list 00:25:04.113 ************************************ 00:25:04.113 07:58:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:04.113 07:58:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:04.113 07:58:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:04.113 07:58:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:04.371 ************************************ 00:25:04.371 START TEST nvmf_wait_for_buf 00:25:04.371 ************************************ 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:04.371 * Looking for test storage... 
00:25:04.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:04.371 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:04.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.372 --rc genhtml_branch_coverage=1 00:25:04.372 --rc genhtml_function_coverage=1 00:25:04.372 --rc genhtml_legend=1 00:25:04.372 --rc geninfo_all_blocks=1 00:25:04.372 --rc geninfo_unexecuted_blocks=1 00:25:04.372 00:25:04.372 ' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:04.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.372 --rc genhtml_branch_coverage=1 00:25:04.372 --rc genhtml_function_coverage=1 00:25:04.372 --rc genhtml_legend=1 00:25:04.372 --rc geninfo_all_blocks=1 00:25:04.372 --rc geninfo_unexecuted_blocks=1 00:25:04.372 00:25:04.372 ' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:04.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.372 --rc genhtml_branch_coverage=1 00:25:04.372 --rc genhtml_function_coverage=1 00:25:04.372 --rc genhtml_legend=1 00:25:04.372 --rc geninfo_all_blocks=1 00:25:04.372 --rc geninfo_unexecuted_blocks=1 00:25:04.372 00:25:04.372 ' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:04.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.372 --rc genhtml_branch_coverage=1 00:25:04.372 --rc genhtml_function_coverage=1 00:25:04.372 --rc genhtml_legend=1 00:25:04.372 --rc geninfo_all_blocks=1 00:25:04.372 --rc geninfo_unexecuted_blocks=1 00:25:04.372 00:25:04.372 ' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:04.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.372 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.373 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.373 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:04.373 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:04.373 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:04.373 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:06.909 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.909 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:06.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:06.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.910 07:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:06.910 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:06.910 07:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.910 07:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:25:06.910 00:25:06.910 --- 10.0.0.2 ping statistics --- 00:25:06.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.910 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
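
The net-device discovery earlier in this run globs each PCI device's `net/` directory under sysfs and then trims the result down to the bare interface name with bash's `##*/` longest-prefix expansion (`pci_net_devs=("${pci_net_devs[@]##*/}")`). A minimal stand-alone sketch of that pattern; the sysfs path is faked under `/tmp` here so the sketch runs without the e810 hardware:

```shell
#!/usr/bin/env bash
# Fake a sysfs layout like /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
fake=/tmp/pci_demo/0000:0a:00.0/net
mkdir -p "$fake/cvl_0_0"

# Step 1: glob the net/ directory -> full paths
pci_net_devs=("$fake"/*)

# Step 2: strip everything up to the last '/' -> interface names only
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "${pci_net_devs[0]}"   # → cvl_0_0
```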
00:25:06.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:25:06.910 00:25:06.910 --- 10.0.0.1 ping statistics --- 00:25:06.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.910 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=781446 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 781446 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 781446 ']' 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.910 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.910 [2024-11-18 07:58:59.693947] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:25:06.911 [2024-11-18 07:58:59.694035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.911 [2024-11-18 07:58:59.770346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.911 [2024-11-18 07:58:59.816560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.911 [2024-11-18 07:58:59.816626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:06.911 [2024-11-18 07:58:59.816640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.911 [2024-11-18 07:58:59.816651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.911 [2024-11-18 07:58:59.816660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.911 [2024-11-18 07:58:59.817278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.911 
07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.911 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.172 Malloc0 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.172 [2024-11-18 07:59:00.074243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.172 [2024-11-18 07:59:00.098425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
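
The target bring-up above is a short RPC sequence: shrink the iobuf small pool to 154 buffers (so the later perf run is forced to wait for buffers), start the framework, create a Malloc bdev, create a TCP transport with only 24 buffers, then a subsystem with the namespace and a 10.0.0.2:4420 listener. A dry-run sketch of that sequence; `rpc` here is a hypothetical stand-in that just records each call, whereas the real harness's `rpc_cmd` sends it to the target over /var/tmp/spdk.sock:

```shell
#!/usr/bin/env bash
# Record the RPC sequence instead of sending it (no running target needed).
calls=()
rpc() { calls+=("$*"); }

rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # tiny pool on purpose
rpc framework_start_init
rpc bdev_malloc_create -b Malloc0 32 512
rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

printf '%s\n' "${calls[@]}"
```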
00:25:07.172 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.172 [2024-11-18 07:59:00.181986] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.550 Initializing NVMe Controllers 00:25:08.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:08.550 Initialization complete. Launching workers. 00:25:08.550 ======================================================== 00:25:08.550 Latency(us) 00:25:08.550 Device Information : IOPS MiB/s Average min max 00:25:08.550 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33325.69 23992.22 63857.74 00:25:08.550 ======================================================== 00:25:08.550 Total : 125.00 15.62 33325.69 23992.22 63857.74 00:25:08.550 00:25:08.550 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:08.550 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:08.550 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.550 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.550 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.810 07:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.810 rmmod nvme_tcp 00:25:08.810 rmmod nvme_fabrics 00:25:08.810 rmmod nvme_keyring 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 781446 ']' 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 781446 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 781446 ']' 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 781446 
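
The pass criterion above: with the deliberately undersized iobuf pool, `iobuf_get_stats` must report a non-zero small-pool retry count for the nvmf_TCP module (1974 in this run), proving the "wait for buffer" path was actually exercised. A sketch of that gate, with the retry count hard-coded from the log instead of queried live:

```shell
#!/usr/bin/env bash
# In the real test, retry_count comes from:
#   rpc_cmd iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
retry_count=1974   # value reported in the run above

if [[ $retry_count -eq 0 ]]; then
    echo "FAIL: no small-pool buffer waits observed"
    exit 1
fi
echo "PASS: observed $retry_count small-pool retries"
```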
00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 781446 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 781446' 00:25:08.810 killing process with pid 781446 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 781446 00:25:08.810 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 781446 00:25:09.070 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.070 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.071 07:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.071 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.978 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.978 00:25:10.978 real 0m6.761s 00:25:10.978 user 0m3.187s 00:25:10.978 sys 0m2.042s 00:25:10.978 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.978 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.978 ************************************ 00:25:10.978 END TEST nvmf_wait_for_buf 00:25:10.978 ************************************ 00:25:10.978 07:59:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:10.978 07:59:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:10.978 07:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.978 07:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.978 07:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:10.978 ************************************ 00:25:10.978 START TEST nvmf_fuzz 00:25:10.978 ************************************ 00:25:10.978 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:11.237 * Looking for test storage... 00:25:11.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:11.237 07:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:11.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.237 --rc genhtml_branch_coverage=1 00:25:11.237 --rc genhtml_function_coverage=1 
00:25:11.237 --rc genhtml_legend=1 00:25:11.237 --rc geninfo_all_blocks=1 00:25:11.237 --rc geninfo_unexecuted_blocks=1 00:25:11.237 00:25:11.237 ' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:11.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.237 --rc genhtml_branch_coverage=1 00:25:11.237 --rc genhtml_function_coverage=1 00:25:11.237 --rc genhtml_legend=1 00:25:11.237 --rc geninfo_all_blocks=1 00:25:11.237 --rc geninfo_unexecuted_blocks=1 00:25:11.237 00:25:11.237 ' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:11.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.237 --rc genhtml_branch_coverage=1 00:25:11.237 --rc genhtml_function_coverage=1 00:25:11.237 --rc genhtml_legend=1 00:25:11.237 --rc geninfo_all_blocks=1 00:25:11.237 --rc geninfo_unexecuted_blocks=1 00:25:11.237 00:25:11.237 ' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:11.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.237 --rc genhtml_branch_coverage=1 00:25:11.237 --rc genhtml_function_coverage=1 00:25:11.237 --rc genhtml_legend=1 00:25:11.237 --rc geninfo_all_blocks=1 00:25:11.237 --rc geninfo_unexecuted_blocks=1 00:25:11.237 00:25:11.237 ' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.237 
07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.237 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.238 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.769 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.769 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.769 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.769 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.769 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.769 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.769 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.769 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.770 07:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.770 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.770 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.770 07:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:25:13.770 00:25:13.770 --- 10.0.0.2 ping statistics --- 00:25:13.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.770 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:25:13.770 00:25:13.770 --- 10.0.0.1 ping statistics --- 00:25:13.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.770 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.770 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=783659 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 783659 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 783659 ']' 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.771 Malloc0 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:13.771 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:45.865 Fuzzing completed. 
Shutting down the fuzz application 00:25:45.865 00:25:45.865 Dumping successful admin opcodes: 00:25:45.865 8, 9, 10, 24, 00:25:45.865 Dumping successful io opcodes: 00:25:45.865 0, 9, 00:25:45.865 NS: 0x2000008eff00 I/O qp, Total commands completed: 494724, total successful commands: 2848, random_seed: 1123851072 00:25:45.865 NS: 0x2000008eff00 admin qp, Total commands completed: 60352, total successful commands: 478, random_seed: 4196753856 00:25:45.865 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:45.865 Fuzzing completed. Shutting down the fuzz application 00:25:45.865 00:25:45.865 Dumping successful admin opcodes: 00:25:45.865 24, 00:25:45.865 Dumping successful io opcodes: 00:25:45.865 00:25:45.865 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1359189733 00:25:45.865 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1359301030 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:45.865 07:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:45.865 rmmod nvme_tcp 00:25:45.865 rmmod nvme_fabrics 00:25:45.865 rmmod nvme_keyring 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 783659 ']' 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 783659 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 783659 ']' 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 783659 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783659 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783659' 00:25:45.865 killing process with pid 783659 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 783659 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 783659 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:45.865 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:45.866 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.866 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.866 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.866 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.866 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.866 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.866 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.814 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.814 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:47.814 00:25:47.814 real 0m36.818s 00:25:47.814 user 0m49.974s 00:25:47.814 sys 0m15.311s 00:25:47.814 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.814 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:47.814 ************************************ 00:25:47.814 END TEST nvmf_fuzz 00:25:47.814 ************************************ 00:25:47.814 07:59:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:47.814 07:59:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.814 07:59:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.814 07:59:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:48.076 ************************************ 00:25:48.076 START TEST nvmf_multiconnection 00:25:48.076 ************************************ 00:25:48.076 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:48.076 * Looking for test storage... 
00:25:48.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:48.076 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:48.076 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:48.076 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:48.076 07:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:48.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.076 --rc genhtml_branch_coverage=1 00:25:48.076 --rc genhtml_function_coverage=1 00:25:48.076 --rc genhtml_legend=1 00:25:48.076 --rc geninfo_all_blocks=1 00:25:48.076 --rc geninfo_unexecuted_blocks=1 00:25:48.076 00:25:48.076 ' 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:48.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.076 --rc genhtml_branch_coverage=1 00:25:48.076 --rc genhtml_function_coverage=1 00:25:48.076 --rc genhtml_legend=1 00:25:48.076 --rc geninfo_all_blocks=1 00:25:48.076 --rc geninfo_unexecuted_blocks=1 00:25:48.076 00:25:48.076 ' 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:48.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.076 --rc genhtml_branch_coverage=1 00:25:48.076 --rc genhtml_function_coverage=1 00:25:48.076 --rc genhtml_legend=1 00:25:48.076 --rc geninfo_all_blocks=1 00:25:48.076 --rc geninfo_unexecuted_blocks=1 00:25:48.076 00:25:48.076 ' 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:48.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.076 --rc genhtml_branch_coverage=1 00:25:48.076 --rc genhtml_function_coverage=1 00:25:48.076 --rc genhtml_legend=1 00:25:48.076 --rc geninfo_all_blocks=1 00:25:48.076 --rc geninfo_unexecuted_blocks=1 00:25:48.076 00:25:48.076 ' 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.076 07:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.076 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:48.077 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
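The trace above also records a real bash failure from `nvmf/common.sh` line 33: `'[' '' -eq 1 ']'` aborts with `[: : integer expression expected` because `-eq` was handed an empty operand. Two common guards against that pattern are sketched below; the variable name `maybe_empty` is illustrative only.

```shell
#!/usr/bin/env bash
# '[' '' -eq 1 ']' fails with "[: : integer expression expected"
# when the variable expands to nothing. Two defensive patterns:

maybe_empty=""

# 1) Supply a numeric default so the test operand is never empty.
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "is one"
else
    echo "not one"
fi

# 2) Check non-emptiness first; [[ ]] short-circuits before -eq runs.
if [[ -n $maybe_empty && $maybe_empty -eq 1 ]]; then
    echo "is one"
else
    echo "not one"
fi
```

Both branches print "not one" here instead of erroring out, which is why the log's `integer expression expected` message is non-fatal but noisy.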
00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.611 07:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.611 07:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:50.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:50.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:50.611 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:50.611 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.611 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.612 07:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:25:50.612 00:25:50.612 --- 10.0.0.2 ping statistics --- 00:25:50.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.612 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:50.612 00:25:50.612 --- 10.0.0.1 ping statistics --- 00:25:50.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.612 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
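After wiring the interfaces into the namespace, the test verifies connectivity with a single `ping -c 1` in each direction, whose summary output appears in the trace above. Pulling the average rtt out of that summary is a one-liner; the helper name `avg_rtt` below is mine, and the sample input is copied verbatim from the log.

```shell
#!/usr/bin/env bash
# Extract the average rtt (ms) from ping's summary line, e.g.
# "rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms"
avg_rtt() {
    # Split on '/' and space: field 8 is the avg component.
    awk -F'[/ ]' '/^rtt/ {print $8}'
}

# Sample output copied from the trace above:
avg=$(avg_rtt <<'EOF'
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
EOF
)
echo "avg rtt: $avg ms"   # avg rtt: 0.204 ms
```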
00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=789271 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 789271 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 789271 ']' 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 [2024-11-18 07:59:43.315100] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:25:50.612 [2024-11-18 07:59:43.315188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.612 [2024-11-18 07:59:43.390950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.612 [2024-11-18 07:59:43.440590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.612 [2024-11-18 07:59:43.440655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.612 [2024-11-18 07:59:43.440669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.612 [2024-11-18 07:59:43.440681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.612 [2024-11-18 07:59:43.440691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
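Here `nvmfappstart` launches `nvmf_tgt` inside the namespace and `waitforlisten` blocks until the app is up on `/var/tmp/spdk.sock`. A generic poll-until-ready loop in the same spirit is sketched below; `wait_for_path` is my name, and SPDK's real `waitforlisten` additionally checks the pid is alive and probes the RPC socket rather than just testing for the path.

```shell
#!/usr/bin/env bash
# Minimal wait-until-path-exists loop, in the spirit of waitforlisten.
wait_for_path() {
    local path=$1 retries=${2:-100}
    local i
    for (( i = 0; i < retries; i++ )); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

# Demo: create the file from a background job, then wait on it.
tmp=$(mktemp -u)
( sleep 0.3; touch "$tmp" ) &
wait_for_path "$tmp" 50 && echo "ready: $tmp"
wait
rm -f "$tmp"
```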
00:25:50.612 [2024-11-18 07:59:43.442290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.612 [2024-11-18 07:59:43.442359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.612 [2024-11-18 07:59:43.442426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.612 [2024-11-18 07:59:43.442429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 [2024-11-18 07:59:43.588226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:50.612 07:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 Malloc1 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 [2024-11-18 07:59:43.653430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 Malloc2 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.612 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 Malloc3 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 Malloc4 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 
07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 Malloc5 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.872 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 Malloc6 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 Malloc7 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.873 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.133 Malloc8 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.133 07:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.133 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.133 Malloc9 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.133 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.134 Malloc10 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.134 Malloc11 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:51.134 
07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.134 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:25:52.068 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:52.068 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:52.068 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:52.068 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:52.068 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:53.974 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:53.974 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:53.974 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:53.974 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:53.974 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.974 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:53.974 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.974 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:54.543 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:54.543 07:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:54.543 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.543 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:54.543 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:56.449 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:56.449 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:56.449 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:56.449 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:56.449 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.449 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:56.449 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.449 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:57.383 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:57.383 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:57.383 07:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:57.383 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:57.383 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:59.290 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:59.290 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:59.290 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:59.290 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:59.290 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:59.290 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:59.290 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.290 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:00.230 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:00.230 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:00.230 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:00.230 
07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:00.230 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:02.134 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:02.134 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:02.134 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:02.134 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:02.134 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:02.134 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:02.134 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.134 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:02.700 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:02.700 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:02.700 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.700 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:02.700 07:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:05.234 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:05.234 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:05.234 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:05.234 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:05.234 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:05.234 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:05.234 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.234 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:05.494 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:05.494 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:05.494 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.494 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:05.494 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.397 08:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.397 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.397 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:07.397 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.397 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.397 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.397 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.397 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:08.332 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:08.332 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.332 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.332 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.332 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.866 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.866 08:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.866 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:10.866 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.866 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.866 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:10.866 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.866 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:11.125 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:11.125 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:11.125 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:11.125 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:11.125 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:13.658 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:13.658 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:13.658 08:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:13.658 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:13.658 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.658 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:13.658 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.658 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:14.227 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:14.227 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:14.227 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:14.227 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:14.227 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:16.128 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:16.128 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:16.128 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:16.128 08:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:16.128 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.128 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:16.128 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.128 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:17.068 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:17.068 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:17.068 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.068 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:17.068 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:19.604 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:19.604 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:19.604 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:19.604 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:19.604 08:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:19.604 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:19.604 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.604 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:20.173 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:20.173 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:20.173 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:20.173 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:20.173 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:22.076 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:22.076 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:22.076 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:22.076 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:22.076 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:22.076 
08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:22.076 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:22.076 [global] 00:26:22.076 thread=1 00:26:22.076 invalidate=1 00:26:22.076 rw=read 00:26:22.076 time_based=1 00:26:22.076 runtime=10 00:26:22.076 ioengine=libaio 00:26:22.076 direct=1 00:26:22.076 bs=262144 00:26:22.076 iodepth=64 00:26:22.076 norandommap=1 00:26:22.076 numjobs=1 00:26:22.076 00:26:22.076 [job0] 00:26:22.076 filename=/dev/nvme0n1 00:26:22.076 [job1] 00:26:22.076 filename=/dev/nvme10n1 00:26:22.076 [job2] 00:26:22.076 filename=/dev/nvme1n1 00:26:22.076 [job3] 00:26:22.076 filename=/dev/nvme2n1 00:26:22.076 [job4] 00:26:22.076 filename=/dev/nvme3n1 00:26:22.076 [job5] 00:26:22.076 filename=/dev/nvme4n1 00:26:22.076 [job6] 00:26:22.076 filename=/dev/nvme5n1 00:26:22.076 [job7] 00:26:22.076 filename=/dev/nvme6n1 00:26:22.076 [job8] 00:26:22.076 filename=/dev/nvme7n1 00:26:22.076 [job9] 00:26:22.076 filename=/dev/nvme8n1 00:26:22.076 [job10] 00:26:22.076 filename=/dev/nvme9n1 00:26:22.335 Could not set queue depth (nvme0n1) 00:26:22.335 Could not set queue depth (nvme10n1) 00:26:22.335 Could not set queue depth (nvme1n1) 00:26:22.335 Could not set queue depth (nvme2n1) 00:26:22.335 Could not set queue depth (nvme3n1) 00:26:22.335 Could not set queue depth (nvme4n1) 00:26:22.335 Could not set queue depth (nvme5n1) 00:26:22.335 Could not set queue depth (nvme6n1) 00:26:22.335 Could not set queue depth (nvme7n1) 00:26:22.335 Could not set queue depth (nvme8n1) 00:26:22.335 Could not set queue depth (nvme9n1) 00:26:22.335 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:22.335 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.335 fio-3.35 00:26:22.335 Starting 11 threads 00:26:34.613 00:26:34.613 job0: (groupid=0, jobs=1): err= 0: pid=794146: Mon Nov 18 08:00:25 2024 00:26:34.613 read: IOPS=123, BW=30.9MiB/s (32.4MB/s)(314MiB/10171msec) 00:26:34.613 slat (usec): min=8, max=414826, avg=4135.51, stdev=23625.19 00:26:34.613 clat (msec): min=35, max=1186, avg=513.28, stdev=281.22 00:26:34.613 lat (msec): min=35, max=1186, avg=517.41, stdev=283.68 00:26:34.613 clat percentiles (msec): 00:26:34.613 | 1.00th=[ 37], 5.00th=[ 130], 10.00th=[ 184], 20.00th=[ 249], 00:26:34.613 | 30.00th=[ 317], 40.00th=[ 414], 50.00th=[ 498], 60.00th=[ 542], 00:26:34.613 | 70.00th=[ 592], 80.00th=[ 776], 90.00th=[ 953], 95.00th=[ 1062], 00:26:34.613 | 99.00th=[ 1133], 99.50th=[ 1133], 99.90th=[ 1183], 99.95th=[ 1183], 00:26:34.613 | 99.99th=[ 1183] 00:26:34.613 bw ( KiB/s): min=12263, 
max=61440, per=4.88%, avg=30547.70, stdev=13107.78, samples=20 00:26:34.613 iops : min= 47, max= 240, avg=119.10, stdev=51.23, samples=20 00:26:34.613 lat (msec) : 50=1.27%, 100=2.94%, 250=16.15%, 500=30.79%, 750=28.80% 00:26:34.613 lat (msec) : 1000=12.49%, 2000=7.56% 00:26:34.613 cpu : usr=0.03%, sys=0.42%, ctx=198, majf=0, minf=4097 00:26:34.613 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:26:34.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.613 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.613 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.613 job1: (groupid=0, jobs=1): err= 0: pid=794147: Mon Nov 18 08:00:25 2024 00:26:34.613 read: IOPS=129, BW=32.5MiB/s (34.1MB/s)(331MiB/10196msec) 00:26:34.613 slat (usec): min=9, max=644937, avg=4050.74, stdev=26859.84 00:26:34.613 clat (msec): min=54, max=1196, avg=487.99, stdev=265.96 00:26:34.613 lat (msec): min=54, max=1336, avg=492.04, stdev=269.84 00:26:34.613 clat percentiles (msec): 00:26:34.613 | 1.00th=[ 71], 5.00th=[ 136], 10.00th=[ 199], 20.00th=[ 271], 00:26:34.613 | 30.00th=[ 317], 40.00th=[ 363], 50.00th=[ 426], 60.00th=[ 498], 00:26:34.613 | 70.00th=[ 567], 80.00th=[ 693], 90.00th=[ 944], 95.00th=[ 1011], 00:26:34.613 | 99.00th=[ 1167], 99.50th=[ 1183], 99.90th=[ 1200], 99.95th=[ 1200], 00:26:34.613 | 99.99th=[ 1200] 00:26:34.613 bw ( KiB/s): min= 9197, max=61440, per=5.43%, avg=33993.00, stdev=13061.54, samples=19 00:26:34.613 iops : min= 35, max= 240, avg=132.58, stdev=51.15, samples=19 00:26:34.613 lat (msec) : 100=1.58%, 250=14.72%, 500=44.23%, 750=21.13%, 1000=12.38% 00:26:34.613 lat (msec) : 2000=5.96% 00:26:34.613 cpu : usr=0.11%, sys=0.44%, ctx=261, majf=0, minf=4098 00:26:34.613 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:26:34.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.613 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.613 issued rwts: total=1325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.613 job2: (groupid=0, jobs=1): err= 0: pid=794148: Mon Nov 18 08:00:25 2024 00:26:34.613 read: IOPS=99, BW=24.8MiB/s (26.0MB/s)(251MiB/10135msec) 00:26:34.613 slat (usec): min=13, max=270174, avg=8954.01, stdev=32285.01 00:26:34.613 clat (msec): min=11, max=1179, avg=636.01, stdev=267.88 00:26:34.613 lat (msec): min=11, max=1341, avg=644.97, stdev=271.88 00:26:34.613 clat percentiles (msec): 00:26:34.613 | 1.00th=[ 60], 5.00th=[ 144], 10.00th=[ 226], 20.00th=[ 435], 00:26:34.613 | 30.00th=[ 518], 40.00th=[ 567], 50.00th=[ 617], 60.00th=[ 693], 00:26:34.613 | 70.00th=[ 785], 80.00th=[ 877], 90.00th=[ 986], 95.00th=[ 1083], 00:26:34.613 | 99.00th=[ 1116], 99.50th=[ 1116], 99.90th=[ 1183], 99.95th=[ 1183], 00:26:34.613 | 99.99th=[ 1183] 00:26:34.613 bw ( KiB/s): min= 9216, max=39936, per=3.84%, avg=24072.30, stdev=8493.09, samples=20 00:26:34.613 iops : min= 36, max= 156, avg=93.75, stdev=33.18, samples=20 00:26:34.613 lat (msec) : 20=0.10%, 50=0.50%, 100=1.69%, 250=8.66%, 500=17.51% 00:26:34.613 lat (msec) : 750=37.61%, 1000=24.38%, 2000=9.55% 00:26:34.613 cpu : usr=0.03%, sys=0.44%, ctx=158, majf=0, minf=4098 00:26:34.613 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:26:34.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.613 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.613 issued rwts: total=1005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.613 job3: (groupid=0, jobs=1): err= 0: pid=794149: Mon Nov 18 08:00:25 2024 00:26:34.613 read: IOPS=167, BW=42.0MiB/s (44.0MB/s)(428MiB/10195msec) 00:26:34.613 slat (usec): min=11, 
max=369404, avg=4781.48, stdev=24645.85 00:26:34.613 clat (usec): min=534, max=1063.4k, avg=376147.17, stdev=294343.99 00:26:34.613 lat (usec): min=556, max=1077.4k, avg=380928.65, stdev=298545.66 00:26:34.613 clat percentiles (usec): 00:26:34.613 | 1.00th=[ 668], 5.00th=[ 18482], 10.00th=[ 22414], 00:26:34.613 | 20.00th=[ 46400], 30.00th=[ 132645], 40.00th=[ 233833], 00:26:34.613 | 50.00th=[ 337642], 60.00th=[ 476054], 70.00th=[ 566232], 00:26:34.613 | 80.00th=[ 624952], 90.00th=[ 801113], 95.00th=[ 918553], 00:26:34.613 | 99.00th=[1010828], 99.50th=[1019216], 99.90th=[1061159], 00:26:34.614 | 99.95th=[1061159], 99.99th=[1061159] 00:26:34.614 bw ( KiB/s): min=11264, max=120079, per=6.73%, avg=42119.00, stdev=31909.94, samples=20 00:26:34.614 iops : min= 44, max= 469, avg=164.35, stdev=124.65, samples=20 00:26:34.614 lat (usec) : 750=1.29%, 1000=0.29% 00:26:34.614 lat (msec) : 2=0.06%, 4=0.12%, 10=0.64%, 20=3.27%, 50=15.60% 00:26:34.614 lat (msec) : 100=4.62%, 250=15.31%, 500=21.98%, 750=23.44%, 1000=11.75% 00:26:34.614 lat (msec) : 2000=1.64% 00:26:34.614 cpu : usr=0.05%, sys=0.55%, ctx=330, majf=0, minf=3721 00:26:34.614 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:26:34.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.614 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.614 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.614 job4: (groupid=0, jobs=1): err= 0: pid=794150: Mon Nov 18 08:00:25 2024 00:26:34.614 read: IOPS=151, BW=37.9MiB/s (39.8MB/s)(384MiB/10134msec) 00:26:34.614 slat (usec): min=12, max=254853, avg=6530.02, stdev=25738.04 00:26:34.614 clat (msec): min=33, max=1173, avg=415.12, stdev=278.65 00:26:34.614 lat (msec): min=33, max=1173, avg=421.65, stdev=282.91 00:26:34.614 clat percentiles (msec): 00:26:34.614 | 1.00th=[ 36], 5.00th=[ 59], 10.00th=[ 70], 
20.00th=[ 127], 00:26:34.614 | 30.00th=[ 169], 40.00th=[ 275], 50.00th=[ 405], 60.00th=[ 531], 00:26:34.614 | 70.00th=[ 600], 80.00th=[ 667], 90.00th=[ 785], 95.00th=[ 877], 00:26:34.614 | 99.00th=[ 1036], 99.50th=[ 1083], 99.90th=[ 1167], 99.95th=[ 1167], 00:26:34.614 | 99.99th=[ 1167] 00:26:34.614 bw ( KiB/s): min= 7680, max=145117, per=6.02%, avg=37671.55, stdev=33247.48, samples=20 00:26:34.614 iops : min= 30, max= 566, avg=146.95, stdev=129.76, samples=20 00:26:34.614 lat (msec) : 50=2.15%, 100=14.83%, 250=22.77%, 500=15.48%, 750=32.14% 00:26:34.614 lat (msec) : 1000=11.52%, 2000=1.11% 00:26:34.614 cpu : usr=0.14%, sys=0.50%, ctx=180, majf=0, minf=4097 00:26:34.614 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:26:34.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.614 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.614 issued rwts: total=1537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.614 job5: (groupid=0, jobs=1): err= 0: pid=794151: Mon Nov 18 08:00:25 2024 00:26:34.614 read: IOPS=92, BW=23.1MiB/s (24.3MB/s)(236MiB/10178msec) 00:26:34.614 slat (usec): min=12, max=502763, avg=9224.84, stdev=38192.69 00:26:34.614 clat (msec): min=166, max=1423, avg=681.59, stdev=239.94 00:26:34.614 lat (msec): min=298, max=1423, avg=690.82, stdev=243.12 00:26:34.614 clat percentiles (msec): 00:26:34.614 | 1.00th=[ 300], 5.00th=[ 363], 10.00th=[ 430], 20.00th=[ 485], 00:26:34.614 | 30.00th=[ 514], 40.00th=[ 542], 50.00th=[ 625], 60.00th=[ 684], 00:26:34.614 | 70.00th=[ 785], 80.00th=[ 911], 90.00th=[ 1083], 95.00th=[ 1150], 00:26:34.614 | 99.00th=[ 1217], 99.50th=[ 1267], 99.90th=[ 1418], 99.95th=[ 1418], 00:26:34.614 | 99.99th=[ 1418] 00:26:34.614 bw ( KiB/s): min= 7664, max=37376, per=3.59%, avg=22493.75, stdev=9414.73, samples=20 00:26:34.614 iops : min= 29, max= 146, avg=87.60, stdev=37.03, samples=20 
00:26:34.614 lat (msec) : 250=0.11%, 500=25.27%, 750=40.76%, 1000=20.17%, 2000=13.69% 00:26:34.614 cpu : usr=0.04%, sys=0.41%, ctx=123, majf=0, minf=4097 00:26:34.614 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:26:34.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.614 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.614 issued rwts: total=942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.614 job6: (groupid=0, jobs=1): err= 0: pid=794152: Mon Nov 18 08:00:25 2024 00:26:34.614 read: IOPS=155, BW=38.9MiB/s (40.8MB/s)(397MiB/10199msec) 00:26:34.614 slat (usec): min=9, max=401636, avg=4188.76, stdev=23193.58 00:26:34.614 clat (msec): min=21, max=1277, avg=406.24, stdev=295.60 00:26:34.614 lat (msec): min=21, max=1277, avg=410.43, stdev=299.13 00:26:34.614 clat percentiles (msec): 00:26:34.614 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 83], 00:26:34.614 | 30.00th=[ 207], 40.00th=[ 275], 50.00th=[ 376], 60.00th=[ 481], 00:26:34.614 | 70.00th=[ 527], 80.00th=[ 667], 90.00th=[ 844], 95.00th=[ 953], 00:26:34.614 | 99.00th=[ 1116], 99.50th=[ 1150], 99.90th=[ 1200], 99.95th=[ 1284], 00:26:34.614 | 99.99th=[ 1284] 00:26:34.614 bw ( KiB/s): min=11776, max=163512, per=6.23%, avg=39034.10, stdev=32130.08, samples=20 00:26:34.614 iops : min= 46, max= 638, avg=152.25, stdev=125.42, samples=20 00:26:34.614 lat (msec) : 50=17.43%, 100=3.65%, 250=13.66%, 500=29.64%, 750=20.83% 00:26:34.614 lat (msec) : 1000=10.45%, 2000=4.34% 00:26:34.614 cpu : usr=0.09%, sys=0.51%, ctx=397, majf=0, minf=4097 00:26:34.614 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:26:34.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.614 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.614 issued rwts: total=1589,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:34.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.614 job7: (groupid=0, jobs=1): err= 0: pid=794153: Mon Nov 18 08:00:25 2024 00:26:34.614 read: IOPS=497, BW=124MiB/s (130MB/s)(1265MiB/10174msec) 00:26:34.614 slat (usec): min=8, max=619436, avg=1539.81, stdev=12967.62 00:26:34.614 clat (msec): min=32, max=1113, avg=127.01, stdev=178.16 00:26:34.614 lat (msec): min=32, max=1113, avg=128.55, stdev=180.01 00:26:34.614 clat percentiles (msec): 00:26:34.614 | 1.00th=[ 43], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 55], 00:26:34.614 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:26:34.614 | 70.00th=[ 64], 80.00th=[ 78], 90.00th=[ 309], 95.00th=[ 592], 00:26:34.614 | 99.00th=[ 852], 99.50th=[ 927], 99.90th=[ 1116], 99.95th=[ 1116], 00:26:34.614 | 99.99th=[ 1116] 00:26:34.614 bw ( KiB/s): min= 9216, max=286208, per=20.42%, avg=127879.90, stdev=115875.70, samples=20 00:26:34.614 iops : min= 36, max= 1118, avg=499.30, stdev=452.63, samples=20 00:26:34.614 lat (msec) : 50=5.87%, 100=75.95%, 250=5.75%, 500=5.93%, 750=4.01% 00:26:34.614 lat (msec) : 1000=2.19%, 2000=0.30% 00:26:34.614 cpu : usr=0.34%, sys=1.45%, ctx=546, majf=0, minf=4097 00:26:34.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:34.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.614 issued rwts: total=5061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.614 job8: (groupid=0, jobs=1): err= 0: pid=794154: Mon Nov 18 08:00:25 2024 00:26:34.614 read: IOPS=622, BW=156MiB/s (163MB/s)(1588MiB/10200msec) 00:26:34.614 slat (usec): min=12, max=288436, avg=1469.92, stdev=6514.00 00:26:34.614 clat (usec): min=1589, max=622564, avg=101226.69, stdev=86551.92 00:26:34.614 lat (usec): min=1642, max=721033, avg=102696.61, stdev=87751.42 
00:26:34.614 clat percentiles (msec): 00:26:34.614 | 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 37], 20.00th=[ 43], 00:26:34.614 | 30.00th=[ 51], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 82], 00:26:34.614 | 70.00th=[ 87], 80.00th=[ 136], 90.00th=[ 215], 95.00th=[ 305], 00:26:34.614 | 99.00th=[ 426], 99.50th=[ 489], 99.90th=[ 592], 99.95th=[ 592], 00:26:34.614 | 99.99th=[ 625] 00:26:34.614 bw ( KiB/s): min=34885, max=434688, per=25.69%, avg=160889.95, stdev=104248.52, samples=20 00:26:34.614 iops : min= 136, max= 1698, avg=628.35, stdev=407.25, samples=20 00:26:34.614 lat (msec) : 2=0.02%, 4=0.03%, 10=0.13%, 20=1.17%, 50=28.31% 00:26:34.614 lat (msec) : 100=46.34%, 250=16.25%, 500=7.34%, 750=0.43% 00:26:34.614 cpu : usr=0.37%, sys=2.09%, ctx=1373, majf=0, minf=4097 00:26:34.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:34.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.614 issued rwts: total=6351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.614 job9: (groupid=0, jobs=1): err= 0: pid=794155: Mon Nov 18 08:00:25 2024 00:26:34.614 read: IOPS=144, BW=36.0MiB/s (37.8MB/s)(367MiB/10176msec) 00:26:34.614 slat (usec): min=8, max=403184, avg=4729.66, stdev=21867.31 00:26:34.614 clat (msec): min=40, max=1125, avg=438.83, stdev=279.13 00:26:34.614 lat (msec): min=40, max=1125, avg=443.56, stdev=280.69 00:26:34.614 clat percentiles (msec): 00:26:34.614 | 1.00th=[ 57], 5.00th=[ 115], 10.00th=[ 144], 20.00th=[ 169], 00:26:34.614 | 30.00th=[ 218], 40.00th=[ 300], 50.00th=[ 388], 60.00th=[ 460], 00:26:34.614 | 70.00th=[ 542], 80.00th=[ 776], 90.00th=[ 869], 95.00th=[ 936], 00:26:34.614 | 99.00th=[ 1053], 99.50th=[ 1133], 99.90th=[ 1133], 99.95th=[ 1133], 00:26:34.614 | 99.99th=[ 1133] 00:26:34.614 bw ( KiB/s): min=13824, max=94720, per=6.03%, avg=37788.63, 
stdev=23532.52, samples=19 00:26:34.614 iops : min= 54, max= 370, avg=147.42, stdev=91.96, samples=19 00:26:34.614 lat (msec) : 50=0.48%, 100=3.20%, 250=31.22%, 500=31.70%, 750=12.07% 00:26:34.614 lat (msec) : 1000=18.75%, 2000=2.59% 00:26:34.614 cpu : usr=0.09%, sys=0.56%, ctx=241, majf=0, minf=4097 00:26:34.614 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:26:34.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.614 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.614 issued rwts: total=1467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.614 job10: (groupid=0, jobs=1): err= 0: pid=794156: Mon Nov 18 08:00:25 2024 00:26:34.614 read: IOPS=266, BW=66.5MiB/s (69.7MB/s)(677MiB/10179msec) 00:26:34.614 slat (usec): min=9, max=462207, avg=2396.16, stdev=15421.93 00:26:34.614 clat (msec): min=10, max=1554, avg=237.97, stdev=297.91 00:26:34.614 lat (msec): min=10, max=1554, avg=240.37, stdev=299.50 00:26:34.614 clat percentiles (msec): 00:26:34.614 | 1.00th=[ 19], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 46], 00:26:34.614 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 92], 60.00th=[ 165], 00:26:34.614 | 70.00th=[ 249], 80.00th=[ 401], 90.00th=[ 651], 95.00th=[ 978], 00:26:34.614 | 99.00th=[ 1267], 99.50th=[ 1351], 99.90th=[ 1469], 99.95th=[ 1552], 00:26:34.614 | 99.99th=[ 1552] 00:26:34.615 bw ( KiB/s): min= 3584, max=352768, per=10.80%, avg=67657.40, stdev=85094.67, samples=20 00:26:34.615 iops : min= 14, max= 1378, avg=264.10, stdev=332.47, samples=20 00:26:34.615 lat (msec) : 20=1.07%, 50=32.27%, 100=17.69%, 250=19.05%, 500=16.73% 00:26:34.615 lat (msec) : 750=4.17%, 1000=4.28%, 2000=4.73% 00:26:34.615 cpu : usr=0.09%, sys=0.81%, ctx=307, majf=0, minf=4097 00:26:34.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:34.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:34.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.615 issued rwts: total=2708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.615 00:26:34.615 Run status group 0 (all jobs): 00:26:34.615 READ: bw=612MiB/s (641MB/s), 23.1MiB/s-156MiB/s (24.3MB/s-163MB/s), io=6238MiB (6541MB), run=10134-10200msec 00:26:34.615 00:26:34.615 Disk stats (read/write): 00:26:34.615 nvme0n1: ios=2384/0, merge=0/0, ticks=1228190/0, in_queue=1228190, util=97.25% 00:26:34.615 nvme10n1: ios=2648/0, merge=0/0, ticks=1277453/0, in_queue=1277453, util=97.53% 00:26:34.615 nvme1n1: ios=1844/0, merge=0/0, ticks=1200461/0, in_queue=1200461, util=97.74% 00:26:34.615 nvme2n1: ios=3284/0, merge=0/0, ticks=1210210/0, in_queue=1210210, util=97.87% 00:26:34.615 nvme3n1: ios=2947/0, merge=0/0, ticks=1217134/0, in_queue=1217134, util=97.94% 00:26:34.615 nvme4n1: ios=1757/0, merge=0/0, ticks=1201900/0, in_queue=1201900, util=98.25% 00:26:34.615 nvme5n1: ios=3127/0, merge=0/0, ticks=1269199/0, in_queue=1269199, util=98.45% 00:26:34.615 nvme6n1: ios=9994/0, merge=0/0, ticks=1214559/0, in_queue=1214559, util=98.51% 00:26:34.615 nvme7n1: ios=12645/0, merge=0/0, ticks=1252951/0, in_queue=1252951, util=98.93% 00:26:34.615 nvme8n1: ios=2807/0, merge=0/0, ticks=1187434/0, in_queue=1187434, util=99.08% 00:26:34.615 nvme9n1: ios=5278/0, merge=0/0, ticks=1188822/0, in_queue=1188822, util=99.20% 00:26:34.615 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:34.615 [global] 00:26:34.615 thread=1 00:26:34.615 invalidate=1 00:26:34.615 rw=randwrite 00:26:34.615 time_based=1 00:26:34.615 runtime=10 00:26:34.615 ioengine=libaio 00:26:34.615 direct=1 00:26:34.615 bs=262144 00:26:34.615 iodepth=64 00:26:34.615 norandommap=1 00:26:34.615 
numjobs=1
00:26:34.615
00:26:34.615 [job0]
00:26:34.615 filename=/dev/nvme0n1
00:26:34.615 [job1]
00:26:34.615 filename=/dev/nvme10n1
00:26:34.615 [job2]
00:26:34.615 filename=/dev/nvme1n1
00:26:34.615 [job3]
00:26:34.615 filename=/dev/nvme2n1
00:26:34.615 [job4]
00:26:34.615 filename=/dev/nvme3n1
00:26:34.615 [job5]
00:26:34.615 filename=/dev/nvme4n1
00:26:34.615 [job6]
00:26:34.615 filename=/dev/nvme5n1
00:26:34.615 [job7]
00:26:34.615 filename=/dev/nvme6n1
00:26:34.615 [job8]
00:26:34.615 filename=/dev/nvme7n1
00:26:34.615 [job9]
00:26:34.615 filename=/dev/nvme8n1
00:26:34.615 [job10]
00:26:34.615 filename=/dev/nvme9n1
00:26:34.615 Could not set queue depth (nvme0n1)
00:26:34.615 Could not set queue depth (nvme10n1)
00:26:34.615 Could not set queue depth (nvme1n1)
00:26:34.615 Could not set queue depth (nvme2n1)
00:26:34.615 Could not set queue depth (nvme3n1)
00:26:34.615 Could not set queue depth (nvme4n1)
00:26:34.615 Could not set queue depth (nvme5n1)
00:26:34.615 Could not set queue depth (nvme6n1)
00:26:34.615 Could not set queue depth (nvme7n1)
00:26:34.615 Could not set queue depth (nvme8n1)
00:26:34.615 Could not set queue depth (nvme9n1)
00:26:34.615 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:34.615 fio-3.35
00:26:34.615 Starting 11 threads
00:26:44.596
00:26:44.596 job0: (groupid=0, jobs=1): err= 0: pid=794891: Mon Nov 18 08:00:36 2024
00:26:44.596 write: IOPS=359, BW=89.8MiB/s (94.1MB/s)(914MiB/10183msec); 0 zone resets
00:26:44.596 slat (usec): min=23, max=237346, avg=2092.65, stdev=7449.52
00:26:44.596 clat (msec): min=3, max=628, avg=175.95, stdev=161.30
00:26:44.596 lat (msec): min=3, max=637, avg=178.04, stdev=163.11
00:26:44.596 clat percentiles (msec):
00:26:44.596 | 1.00th=[ 21], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 49],
00:26:44.596 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 85], 60.00th=[ 148],
00:26:44.596 | 70.00th=[ 243], 80.00th=[ 363], 90.00th=[ 439], 95.00th=[ 485],
00:26:44.596 | 99.00th=[ 567], 99.50th=[ 584], 99.90th=[ 609], 99.95th=[ 617],
00:26:44.596 | 99.99th=[ 625]
00:26:44.596 bw ( KiB/s): min=26624, max=326003, per=10.35%, avg=91980.45, stdev=94259.49, samples=20
00:26:44.596 iops : min= 104, max= 1273, avg=359.25, stdev=368.15, samples=20
00:26:44.596 lat (msec) : 4=0.05%, 10=0.19%, 20=0.74%, 50=34.70%, 100=17.25%
00:26:44.596 lat (msec) : 250=17.47%, 500=25.46%, 750=4.13%
00:26:44.596 cpu : usr=1.06%, sys=1.22%, ctx=1475, majf=0, minf=1
00:26:44.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3%
00:26:44.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.596 issued rwts: total=0,3657,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.596 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.596 job1: (groupid=0, jobs=1): err= 0: pid=794892: Mon Nov 18 08:00:36 2024
00:26:44.596 write: IOPS=372, BW=93.0MiB/s (97.5MB/s)(945MiB/10158msec); 0 zone resets
00:26:44.596 slat (usec): min=16, max=67551, avg=1094.17, stdev=5035.79
00:26:44.596 clat (usec): min=563, max=766966, avg=170793.64, stdev=160146.89
00:26:44.596 lat (usec): min=630, max=776512, avg=171887.81, stdev=161414.65
00:26:44.596 clat percentiles (usec):
00:26:44.596 | 1.00th=[ 1139], 5.00th=[ 3032], 10.00th=[ 7701], 20.00th=[ 19792],
00:26:44.596 | 30.00th=[ 60031], 40.00th=[ 90702], 50.00th=[119014], 60.00th=[170918],
00:26:44.596 | 70.00th=[235930], 80.00th=[304088], 90.00th=[396362], 95.00th=[455082],
00:26:44.596 | 99.00th=[692061], 99.50th=[734004], 99.90th=[759170], 99.95th=[767558],
00:26:44.596 | 99.99th=[767558]
00:26:44.596 bw ( KiB/s): min=25600, max=275968, per=10.71%, avg=95113.25, stdev=64226.52, samples=20
00:26:44.596 iops : min= 100, max= 1078, avg=371.45, stdev=250.88, samples=20
00:26:44.596 lat (usec) : 750=0.13%, 1000=0.61%
00:26:44.596 lat (msec) : 2=1.77%, 4=4.23%, 10=4.42%, 20=8.94%, 50=7.83%
00:26:44.596 lat (msec) : 100=17.31%, 250=26.65%, 500=24.08%, 750=3.78%, 1000=0.24%
00:26:44.596 cpu : usr=1.09%, sys=1.34%, ctx=2960, majf=0, minf=1
00:26:44.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3%
00:26:44.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.596 issued rwts: total=0,3779,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.596 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.596 job2: (groupid=0, jobs=1): err= 0: pid=794904: Mon Nov 18 08:00:36 2024
00:26:44.596 write: IOPS=363, BW=90.8MiB/s (95.2MB/s)(926MiB/10199msec); 0 zone resets
00:26:44.596 slat (usec): min=22, max=240765, avg=1936.12, stdev=9184.65
00:26:44.596 clat (msec): min=2, max=708, avg=174.18, stdev=128.02
00:26:44.596 lat (msec): min=2, max=708, avg=176.11, stdev=129.12
00:26:44.596 clat percentiles (msec):
00:26:44.596 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 51], 20.00th=[ 70],
00:26:44.596 | 30.00th=[ 89], 40.00th=[ 116], 50.00th=[ 148], 60.00th=[ 182],
00:26:44.596 | 70.00th=[ 218], 80.00th=[ 249], 90.00th=[ 342], 95.00th=[ 426],
00:26:44.596 | 99.00th=[ 651], 99.50th=[ 684], 99.90th=[ 709], 99.95th=[ 709],
00:26:44.596 | 99.99th=[ 709]
00:26:44.596 bw ( KiB/s): min=20480, max=224768, per=10.49%, avg=93186.90, stdev=46430.36, samples=20
00:26:44.596 iops : min= 80, max= 878, avg=363.95, stdev=181.40, samples=20
00:26:44.596 lat (msec) : 4=0.22%, 10=0.70%, 20=3.27%, 50=5.73%, 100=26.22%
00:26:44.596 lat (msec) : 250=44.40%, 500=16.23%, 750=3.24%
00:26:44.596 cpu : usr=1.02%, sys=1.26%, ctx=1618, majf=0, minf=1
00:26:44.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3%
00:26:44.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.596 issued rwts: total=0,3703,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.596 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.596 job3: (groupid=0, jobs=1): err= 0: pid=794905: Mon Nov 18 08:00:36 2024
00:26:44.596 write: IOPS=327, BW=81.8MiB/s (85.7MB/s)(825MiB/10089msec); 0 zone resets
00:26:44.597 slat (usec): min=18, max=50535, avg=2274.12, stdev=6536.41
00:26:44.597 clat (usec): min=1188, max=644540, avg=193305.63, stdev=152110.87
00:26:44.597 lat (usec): min=1340, max=680790, avg=195579.74, stdev=154172.15
00:26:44.597 clat percentiles (msec):
00:26:44.597 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 72],
00:26:44.597 | 30.00th=[ 95], 40.00th=[ 116], 50.00th=[ 142], 60.00th=[ 178],
00:26:44.597 | 70.00th=[ 241], 80.00th=[ 338], 90.00th=[ 418], 95.00th=[ 523],
00:26:44.597 | 99.00th=[ 609], 99.50th=[ 625], 99.90th=[ 634], 99.95th=[ 642],
00:26:44.597 | 99.99th=[ 642]
00:26:44.597 bw ( KiB/s): min=28672, max=205723, per=9.33%, avg=82875.45, stdev=50465.85, samples=20
00:26:44.597 iops : min= 112, max= 803, avg=323.70, stdev=197.05, samples=20
00:26:44.597 lat (msec) : 2=0.18%, 4=1.21%, 10=1.73%, 20=1.91%, 50=9.27%
00:26:44.597 lat (msec) : 100=18.03%, 250=39.91%, 500=22.27%, 750=5.48%
00:26:44.597 cpu : usr=0.91%, sys=1.00%, ctx=1705, majf=0, minf=1
00:26:44.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:26:44.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.597 issued rwts: total=0,3300,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.597 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.597 job4: (groupid=0, jobs=1): err= 0: pid=794906: Mon Nov 18 08:00:36 2024
00:26:44.597 write: IOPS=234, BW=58.5MiB/s (61.3MB/s)(596MiB/10187msec); 0 zone resets
00:26:44.597 slat (usec): min=21, max=172737, avg=2420.49, stdev=8844.65
00:26:44.597 clat (usec): min=822, max=783433, avg=270823.70, stdev=157451.84
00:26:44.597 lat (usec): min=934, max=783467, avg=273244.19, stdev=159322.79
00:26:44.597 clat percentiles (msec):
00:26:44.597 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 56], 20.00th=[ 142],
00:26:44.597 | 30.00th=[ 182], 40.00th=[ 213], 50.00th=[ 247], 60.00th=[ 300],
00:26:44.597 | 70.00th=[ 351], 80.00th=[ 405], 90.00th=[ 464], 95.00th=[ 542],
00:26:44.597 | 99.00th=[ 735], 99.50th=[ 768], 99.90th=[ 785], 99.95th=[ 785],
00:26:44.597 | 99.99th=[ 785]
00:26:44.597 bw ( KiB/s): min=20480, max=113664, per=6.69%, avg=59415.80, stdev=25071.71, samples=20
00:26:44.597 iops : min= 80, max= 444, avg=232.05, stdev=97.94, samples=20
00:26:44.597 lat (usec) : 1000=0.08%
00:26:44.597 lat (msec) : 2=0.17%, 4=0.08%, 10=0.29%, 20=1.09%, 50=6.54%
00:26:44.597 lat (msec) : 100=6.04%, 250=36.37%, 500=42.28%, 750=6.29%, 1000=0.76%
00:26:44.597 cpu : usr=0.69%, sys=0.85%, ctx=1547, majf=0, minf=1
00:26:44.597 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4%
00:26:44.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.597 issued rwts: total=0,2384,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.597 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.597 job5: (groupid=0, jobs=1): err= 0: pid=794908: Mon Nov 18 08:00:36 2024
00:26:44.597 write: IOPS=282, BW=70.5MiB/s (74.0MB/s)(716MiB/10151msec); 0 zone resets
00:26:44.597 slat (usec): min=14, max=65746, avg=2398.48, stdev=7291.46
00:26:44.597 clat (usec): min=737, max=766896, avg=224330.44, stdev=166688.50
00:26:44.597 lat (usec): min=768, max=776486, avg=226728.91, stdev=168794.19
00:26:44.597 clat percentiles (usec):
00:26:44.597 | 1.00th=[ 1860], 5.00th=[ 26346], 10.00th=[ 45351], 20.00th=[ 87557],
00:26:44.597 | 30.00th=[111674], 40.00th=[139461], 50.00th=[179307], 60.00th=[214959],
00:26:44.597 | 70.00th=[278922], 80.00th=[362808], 90.00th=[476054], 95.00th=[574620],
00:26:44.597 | 99.00th=[700449], 99.50th=[725615], 99.90th=[759170], 99.95th=[767558],
00:26:44.597 | 99.99th=[767558]
00:26:44.597 bw ( KiB/s): min=20480, max=159744, per=8.07%, avg=71708.90, stdev=31828.57, samples=20
00:26:44.597 iops : min= 80, max= 624, avg=280.05, stdev=124.34, samples=20
00:26:44.597 lat (usec) : 750=0.03%, 1000=0.03%
00:26:44.597 lat (msec) : 2=1.01%, 4=0.49%, 10=0.59%, 20=1.64%, 50=6.95%
00:26:44.597 lat (msec) : 100=14.14%, 250=42.39%, 500=23.78%, 750=8.73%, 1000=0.21%
00:26:44.597 cpu : usr=0.92%, sys=0.84%, ctx=1597, majf=0, minf=1
00:26:44.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8%
00:26:44.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.597 issued rwts: total=0,2864,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.597 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.597 job6: (groupid=0, jobs=1): err= 0: pid=794912: Mon Nov 18 08:00:36 2024
00:26:44.597 write: IOPS=277, BW=69.3MiB/s (72.7MB/s)(704MiB/10151msec); 0 zone resets
00:26:44.597 slat (usec): min=20, max=164830, avg=2803.86, stdev=8296.59
00:26:44.597 clat (msec): min=4, max=904, avg=227.94, stdev=175.02
00:26:44.597 lat (msec): min=4, max=904, avg=230.75, stdev=176.99
00:26:44.597 clat percentiles (msec):
00:26:44.597 | 1.00th=[ 13], 5.00th=[ 41], 10.00th=[ 81], 20.00th=[ 90],
00:26:44.597 | 30.00th=[ 96], 40.00th=[ 113], 50.00th=[ 153], 60.00th=[ 224],
00:26:44.597 | 70.00th=[ 313], 80.00th=[ 380], 90.00th=[ 451], 95.00th=[ 584],
00:26:44.597 | 99.00th=[ 760], 99.50th=[ 860], 99.90th=[ 902], 99.95th=[ 902],
00:26:44.597 | 99.99th=[ 902]
00:26:44.597 bw ( KiB/s): min=23040, max=160256, per=7.93%, avg=70413.75, stdev=42220.49, samples=20
00:26:44.597 iops : min= 90, max= 626, avg=275.00, stdev=164.95, samples=20
00:26:44.597 lat (msec) : 10=0.36%, 20=1.81%, 50=4.62%, 100=26.01%, 250=29.82%
00:26:44.597 lat (msec) : 500=29.64%, 750=6.68%, 1000=1.07%
00:26:44.597 cpu : usr=0.92%, sys=1.01%, ctx=1286, majf=0, minf=1
00:26:44.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8%
00:26:44.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.597 issued rwts: total=0,2814,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.597 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.597 job7: (groupid=0, jobs=1): err= 0: pid=794913: Mon Nov 18 08:00:36 2024
00:26:44.597 write: IOPS=298, BW=74.7MiB/s (78.3MB/s)(761MiB/10189msec); 0 zone resets
00:26:44.597 slat (usec): min=16, max=193210, avg=1677.77, stdev=7284.56
00:26:44.597 clat (msec): min=3, max=786, avg=212.36, stdev=184.17
00:26:44.597 lat (msec): min=3, max=786, avg=214.04, stdev=185.56
00:26:44.597 clat percentiles (msec):
00:26:44.597 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 31], 20.00th=[ 43],
00:26:44.597 | 30.00th=[ 56], 40.00th=[ 77], 50.00th=[ 134], 60.00th=[ 257],
00:26:44.597 | 70.00th=[ 317], 80.00th=[ 397], 90.00th=[ 485], 95.00th=[ 535],
00:26:44.597 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 760], 99.95th=[ 776],
00:26:44.597 | 99.99th=[ 785]
00:26:44.597 bw ( KiB/s): min=22528, max=287232, per=8.59%, avg=76318.10, stdev=62239.33, samples=20
00:26:44.597 iops : min= 88, max= 1122, avg=298.05, stdev=243.16, samples=20
00:26:44.597 lat (msec) : 4=0.20%, 10=0.46%, 20=2.73%, 50=24.53%, 100=17.04%
00:26:44.597 lat (msec) : 250=14.06%, 500=33.00%, 750=7.85%, 1000=0.13%
00:26:44.597 cpu : usr=0.86%, sys=1.14%, ctx=1988, majf=0, minf=1
00:26:44.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9%
00:26:44.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.597 issued rwts: total=0,3045,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.597 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.597 job8: (groupid=0, jobs=1): err= 0: pid=794914: Mon Nov 18 08:00:36 2024
00:26:44.597 write: IOPS=334, BW=83.6MiB/s (87.6MB/s)(850MiB/10175msec); 0 zone resets
00:26:44.597 slat (usec): min=14, max=125708, avg=1494.92, stdev=6221.65
00:26:44.597 clat (usec): min=736, max=847592, avg=189873.95, stdev=177609.83
00:26:44.597 lat (usec): min=768, max=897809, avg=191368.87, stdev=179046.21
00:26:44.597 clat percentiles (msec):
00:26:44.597 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 21],
00:26:44.597 | 30.00th=[ 74], 40.00th=[ 92], 50.00th=[ 124], 60.00th=[ 190],
00:26:44.597 | 70.00th=[ 275], 80.00th=[ 342], 90.00th=[ 439], 95.00th=[ 506],
00:26:44.597 | 99.00th=[ 760], 99.50th=[ 785], 99.90th=[ 844], 99.95th=[ 844],
00:26:44.597 | 99.99th=[ 852]
00:26:44.597 bw ( KiB/s): min=20480, max=262656, per=9.62%, avg=85446.65, stdev=53340.33, samples=20
00:26:44.597 iops : min= 80, max= 1026, avg=333.70, stdev=208.31, samples=20
00:26:44.597 lat (usec) : 750=0.03%, 1000=0.06%
00:26:44.597 lat (msec) : 2=0.59%, 4=4.09%, 10=8.41%, 20=6.88%, 50=5.79%
00:26:44.597 lat (msec) : 100=17.94%, 250=22.46%, 500=28.58%, 750=3.94%, 1000=1.23%
00:26:44.597 cpu : usr=1.06%, sys=1.14%, ctx=2431, majf=0, minf=2
00:26:44.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1%
00:26:44.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.597 issued rwts: total=0,3401,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.597 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.597 job9: (groupid=0, jobs=1): err= 0: pid=794915: Mon Nov 18 08:00:36 2024
00:26:44.597 write: IOPS=287, BW=71.9MiB/s (75.4MB/s)(734MiB/10202msec); 0 zone resets
00:26:44.597 slat (usec): min=18, max=113703, avg=2502.92, stdev=7239.46
00:26:44.597 clat (msec): min=3, max=772, avg=219.82, stdev=159.41
00:26:44.597 lat (msec): min=3, max=773, avg=222.33, stdev=161.24
00:26:44.597 clat percentiles (msec):
00:26:44.597 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 37], 20.00th=[ 88],
00:26:44.597 | 30.00th=[ 121], 40.00th=[ 161], 50.00th=[ 194], 60.00th=[ 224],
00:26:44.597 | 70.00th=[ 264], 80.00th=[ 313], 90.00th=[ 456], 95.00th=[ 558],
00:26:44.597 | 99.00th=[ 709], 99.50th=[ 751], 99.90th=[ 768], 99.95th=[ 776],
00:26:44.597 | 99.99th=[ 776]
00:26:44.597 bw ( KiB/s): min=22528, max=156359, per=8.28%, avg=73514.70, stdev=39541.65, samples=20
00:26:44.597 iops : min= 88, max= 610, avg=287.05, stdev=154.36, samples=20
00:26:44.597 lat (msec) : 4=0.17%, 10=1.09%, 20=3.61%, 50=8.48%, 100=10.43%
00:26:44.597 lat (msec) : 250=43.44%, 500=25.86%, 750=6.44%, 1000=0.48%
00:26:44.597 cpu : usr=0.80%, sys=1.18%, ctx=1586, majf=0, minf=2
00:26:44.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9%
00:26:44.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.597 issued rwts: total=0,2935,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.597 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.597 job10: (groupid=0, jobs=1): err= 0: pid=794916: Mon Nov 18 08:00:36 2024
00:26:44.597 write: IOPS=345, BW=86.4MiB/s (90.6MB/s)(880MiB/10187msec); 0 zone resets
00:26:44.598 slat (usec): min=15, max=58288, avg=2254.98, stdev=6240.15
00:26:44.598 clat (usec): min=1000, max=712187, avg=182806.50, stdev=142164.64
00:26:44.598 lat (usec): min=1065, max=712226, avg=185061.47, stdev=143714.02
00:26:44.598 clat percentiles (msec):
00:26:44.598 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 11], 20.00th=[ 68],
00:26:44.598 | 30.00th=[ 101], 40.00th=[ 136], 50.00th=[ 161], 60.00th=[ 188],
00:26:44.598 | 70.00th=[ 218], 80.00th=[ 266], 90.00th=[ 393], 95.00th=[ 485],
00:26:44.598 | 99.00th=[ 651], 99.50th=[ 693], 99.90th=[ 709], 99.95th=[ 709],
00:26:44.598 | 99.99th=[ 709]
00:26:44.598 bw ( KiB/s): min=22528, max=176128, per=9.96%, avg=88524.70, stdev=37476.87, samples=20
00:26:44.598 iops : min= 88, max= 688, avg=345.70, stdev=146.47, samples=20
00:26:44.598 lat (msec) : 2=0.45%, 4=2.44%, 10=6.90%, 20=4.06%, 50=4.86%
00:26:44.598 lat (msec) : 100=11.16%, 250=47.46%, 500=18.74%, 750=3.92%
00:26:44.598 cpu : usr=1.00%, sys=1.21%, ctx=1710, majf=0, minf=2
00:26:44.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2%
00:26:44.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:44.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:44.598 issued rwts: total=0,3521,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:44.598 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:44.598
00:26:44.598 Run status group 0 (all jobs):
00:26:44.598 WRITE: bw=868MiB/s (910MB/s), 58.5MiB/s-93.0MiB/s (61.3MB/s-97.5MB/s), io=8851MiB (9281MB), run=10089-10202msec
00:26:44.598
00:26:44.598 Disk stats (read/write):
00:26:44.598 nvme0n1: ios=44/7285, merge=0/0, ticks=974/1240569, in_queue=1241543, util=99.76%
00:26:44.598 nvme10n1: ios=51/7404, merge=0/0, ticks=650/1211684, in_queue=1212334, util=99.94%
00:26:44.598 nvme1n1: ios=43/7356, merge=0/0, ticks=2429/1158702, in_queue=1161131, util=99.98%
00:26:44.598 nvme2n1: ios=13/6280, merge=0/0, ticks=341/1216668, in_queue=1217009, util=97.75%
00:26:44.598 nvme3n1: ios=43/4738, merge=0/0, ticks=1447/1247820, in_queue=1249267, util=99.99%
00:26:44.598 nvme4n1: ios=0/5496, merge=0/0, ticks=0/1220937, in_queue=1220937, util=98.05%
00:26:44.598 nvme5n1: ios=44/5400, merge=0/0, ticks=1749/1217532, in_queue=1219281, util=100.00%
00:26:44.598 nvme6n1: ios=0/6050, merge=0/0, ticks=0/1252603, in_queue=1252603, util=98.40%
00:26:44.598 nvme7n1: ios=0/6788, merge=0/0, ticks=0/1253796, in_queue=1253796, util=98.81%
00:26:44.598 nvme8n1: ios=0/5814, merge=0/0, ticks=0/1237004, in_queue=1237004, util=99.01%
00:26:44.598 nvme9n1: ios=0/7006, merge=0/0, ticks=0/1238024, in_queue=1238024, util=99.14%
00:26:44.598 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:26:44.598 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:26:44.598 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:44.598 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:26:44.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:26:44.598 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:44.598 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:26:44.857 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:26:44.857 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:26:44.857 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:45.115 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:26:45.373 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:45.373 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:26:45.634 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:45.634 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:26:45.894 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:26:45.894 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:45.895 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:26:46.154 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:26:46.154 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:26:46.154 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:46.154 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:46.154 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7
00:26:46.154 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:46.154 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:26:46.154 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:46.154 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:26:46.413 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:26:46.413 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:46.413 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:26:46.672 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.672 
08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.672 rmmod nvme_tcp 00:26:46.672 rmmod nvme_fabrics 00:26:46.672 rmmod nvme_keyring 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 789271 ']' 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 789271 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 789271 ']' 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 789271 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 789271 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 789271' 00:26:46.672 killing process with pid 789271 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 789271 00:26:46.672 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 789271 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.239 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.149 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:49.149 00:26:49.149 real 1m1.280s 00:26:49.149 user 3m38.778s 00:26:49.149 sys 0m14.476s 
00:26:49.149 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.149 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.149 ************************************ 00:26:49.149 END TEST nvmf_multiconnection 00:26:49.149 ************************************ 00:26:49.149 08:00:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:49.149 08:00:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:49.149 08:00:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.149 08:00:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:49.407 ************************************ 00:26:49.407 START TEST nvmf_initiator_timeout 00:26:49.407 ************************************ 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:49.407 * Looking for test storage... 
00:26:49.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:49.407 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.408 --rc genhtml_branch_coverage=1 00:26:49.408 --rc genhtml_function_coverage=1 00:26:49.408 --rc genhtml_legend=1 00:26:49.408 --rc geninfo_all_blocks=1 00:26:49.408 --rc geninfo_unexecuted_blocks=1 00:26:49.408 00:26:49.408 ' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.408 --rc genhtml_branch_coverage=1 00:26:49.408 --rc genhtml_function_coverage=1 00:26:49.408 --rc genhtml_legend=1 00:26:49.408 --rc geninfo_all_blocks=1 00:26:49.408 --rc geninfo_unexecuted_blocks=1 00:26:49.408 00:26:49.408 ' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.408 --rc genhtml_branch_coverage=1 00:26:49.408 --rc genhtml_function_coverage=1 00:26:49.408 --rc genhtml_legend=1 00:26:49.408 --rc geninfo_all_blocks=1 00:26:49.408 --rc geninfo_unexecuted_blocks=1 00:26:49.408 00:26:49.408 ' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.408 --rc genhtml_branch_coverage=1 00:26:49.408 --rc genhtml_function_coverage=1 00:26:49.408 --rc genhtml_legend=1 00:26:49.408 --rc geninfo_all_blocks=1 00:26:49.408 --rc geninfo_unexecuted_blocks=1 00:26:49.408 00:26:49.408 ' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.408 
08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:49.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.408 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.944 08:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.944 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.944 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:51.944 08:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.944 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.944 08:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.944 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.945 08:00:44 
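The namespace plumbing traced above (nvmf/common.sh, `nvmf_tcp_init`) moves one physical port into a dedicated network namespace for the target while the peer port stays in the root namespace for the initiator. A minimal dry-run sketch of that sequence, using the interface names and addresses from this log (`cvl_0_0`/`cvl_0_1`, 10.0.0.1/10.0.0.2); the `run` wrapper only echoes, so the steps can be inspected without root. Swap `echo "$@"` for `"$@"` to apply them for real:

```shell
#!/bin/sh
# Dry-run sketch of the target/initiator namespace split performed above.
# Interface names and IPs are taken from this log; everything is echoed,
# not executed, so no privileges are needed.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # moved into the namespace, gets the target IP
INI_IF=cvl_0_1          # stays in the root namespace, gets the initiator IP
run() { echo "$@"; }    # replace the echo with "$@" (as root) to configure

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
```

The cross-namespace pings that follow in the log verify this wiring in both directions before the target is started.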
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:51.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:26:51.945 00:26:51.945 --- 10.0.0.2 ping statistics --- 00:26:51.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.945 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:26:51.945 00:26:51.945 --- 10.0.0.1 ping statistics --- 00:26:51.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.945 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=798109 
00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 798109 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 798109 ']' 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.945 [2024-11-18 08:00:44.701686] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:26:51.945 [2024-11-18 08:00:44.701788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.945 [2024-11-18 08:00:44.774570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.945 [2024-11-18 08:00:44.818502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:51.945 [2024-11-18 08:00:44.818560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.945 [2024-11-18 08:00:44.818582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.945 [2024-11-18 08:00:44.818593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.945 [2024-11-18 08:00:44.818602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.945 [2024-11-18 08:00:44.820147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.945 [2024-11-18 08:00:44.820213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.945 [2024-11-18 08:00:44.820281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.945 [2024-11-18 08:00:44.820285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:51.945 
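The target startup traced above runs `nvmf_tgt` inside the namespace (so its TCP listener binds the moved NIC) and then `waitforlisten` polls for the `/var/tmp/spdk.sock` RPC socket before issuing any RPCs. A hedged dry-run sketch of that pattern; the core mask `-m 0xF` and trace flags come from the log, while the relative binary path and the retry count here are illustrative assumptions (the harness retries up to 100 times):

```shell
#!/bin/sh
# Dry-run sketch: start the target in the namespace, then wait for its
# RPC socket. The launch is echoed rather than executed; the poll loop is
# shortened to 5 iterations for illustration (the real harness uses ~100).
NS=cvl_0_0_ns_spdk
SOCK=/var/tmp/spdk.sock
run() { echo "$@"; }

# Assumed path relative to an SPDK checkout; the log uses an absolute path.
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

# waitforlisten-style poll: loop until the UNIX-domain RPC socket appears
i=0
while [ "$i" -lt 5 ] && [ ! -S "$SOCK" ]; do
    i=$((i + 1))
    sleep 0.1
done
```

Only once the socket exists does the script record the PID (`nvmfpid`) and proceed to configuration.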
08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.945 Malloc0 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.945 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.945 Delay0 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.945 [2024-11-18 08:00:45.010686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.945 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.204 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.204 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.204 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.204 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.204 [2024-11-18 08:00:45.039010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.204 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.204 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:52.771 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:52.771 
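The configuration RPCs traced above (target/initiator_timeout.sh) build a 64 MiB malloc bdev, wrap it in a delay bdev with 30 µs baseline latencies, and export it over NVMe/TCP before the kernel initiator connects. A dry-run sketch of that sequence; the NQN, serial, and arguments are from the log, while the `scripts/rpc.py` path is an assumption (the log invokes these via the harness's `rpc_cmd` wrapper):

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence issued above. Commands are echoed so
# the order can be inspected without a running target.
RPC="scripts/rpc.py"             # assumed SPDK RPC client path
NQN=nqn.2016-06.io.spdk:cnode1
run() { echo "$@"; }

run "$RPC" bdev_malloc_create 64 512 -b Malloc0
run "$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
run "$RPC" nvmf_create_transport -t tcp -o -u 8192
run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
run "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
# Initiator side, from the root namespace: kernel NVMe/TCP connect
run nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
```

Exposing `Delay0` (rather than `Malloc0`) as the namespace is the point of this test: the later `bdev_delay_update_latency` calls raise its latencies to ~31 s to provoke initiator timeouts while fio is running, then restore them.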
08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:52.771 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:52.771 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:52.771 08:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=798531 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:54.679 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:54.679 [global] 00:26:54.679 thread=1 00:26:54.679 invalidate=1 00:26:54.679 rw=write 00:26:54.679 time_based=1 00:26:54.679 runtime=60 00:26:54.679 ioengine=libaio 00:26:54.679 direct=1 00:26:54.679 bs=4096 00:26:54.679 
iodepth=1 00:26:54.679 norandommap=0 00:26:54.679 numjobs=1 00:26:54.679 00:26:54.679 verify_dump=1 00:26:54.679 verify_backlog=512 00:26:54.679 verify_state_save=0 00:26:54.679 do_verify=1 00:26:54.679 verify=crc32c-intel 00:26:54.679 [job0] 00:26:54.679 filename=/dev/nvme0n1 00:26:54.679 Could not set queue depth (nvme0n1) 00:26:54.938 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:54.938 fio-3.35 00:26:54.938 Starting 1 thread 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.241 true 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.241 true 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.241 true 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.241 true 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.241 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.778 true 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.778 true 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.778 08:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.778 true 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.778 true 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:00.778 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 798531 00:27:57.022 00:27:57.022 job0: (groupid=0, jobs=1): err= 0: pid=798600: Mon Nov 18 08:01:48 2024 00:27:57.022 read: IOPS=39, BW=157KiB/s (161kB/s)(9440KiB/60017msec) 00:27:57.022 slat (usec): min=5, max=16227, avg=27.14, stdev=461.53 00:27:57.022 clat (usec): min=221, max=41009k, avg=25141.18, stdev=844136.45 00:27:57.022 lat (usec): min=229, max=41009k, avg=25168.32, stdev=844136.34 00:27:57.022 clat percentiles (usec): 00:27:57.022 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 00:27:57.022 | 20.00th=[ 247], 30.00th=[ 253], 40.00th=[ 258], 00:27:57.022 | 50.00th=[ 265], 60.00th=[ 277], 70.00th=[ 310], 00:27:57.022 | 80.00th=[ 490], 90.00th=[ 41157], 95.00th=[ 41157], 
00:27:57.022 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41681], 00:27:57.022 | 99.95th=[ 42206], 99.99th=[17112761] 00:27:57.022 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60017msec); 0 zone resets 00:27:57.022 slat (usec): min=7, max=31473, avg=25.73, stdev=621.84 00:27:57.022 clat (usec): min=164, max=443, avg=209.05, stdev=33.11 00:27:57.022 lat (usec): min=171, max=31824, avg=234.78, stdev=625.78 00:27:57.022 clat percentiles (usec): 00:27:57.022 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 186], 00:27:57.022 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:27:57.022 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 255], 95.00th=[ 273], 00:27:57.022 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 420], 99.95th=[ 441], 00:27:57.022 | 99.99th=[ 445] 00:27:57.022 bw ( KiB/s): min= 1000, max= 4096, per=100.00%, avg=3413.33, stdev=1248.12, samples=6 00:27:57.022 iops : min= 250, max= 1024, avg=853.33, stdev=312.03, samples=6 00:27:57.022 lat (usec) : 250=58.76%, 500=31.91%, 750=0.41% 00:27:57.022 lat (msec) : 2=0.06%, 4=0.02%, 50=8.82%, >=2000=0.02% 00:27:57.022 cpu : usr=0.10%, sys=0.13%, ctx=4924, majf=0, minf=1 00:27:57.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.022 issued rwts: total=2360,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:57.022 00:27:57.022 Run status group 0 (all jobs): 00:27:57.022 READ: bw=157KiB/s (161kB/s), 157KiB/s-157KiB/s (161kB/s-161kB/s), io=9440KiB (9667kB), run=60017-60017msec 00:27:57.022 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60017-60017msec 00:27:57.022 00:27:57.022 Disk stats (read/write): 00:27:57.022 nvme0n1: ios=2409/2560, merge=0/0, ticks=19503/510, in_queue=20013, 
util=99.89% 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:57.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:57.022 nvmf hotplug test: fio successful as expected 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.022 08:01:48 
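The teardown path in this phase (`nvmftestfini` in nvmf/common.sh, per the trace) disconnects the initiator, deletes the subsystem, strips the `SPDK_NVMF`-tagged iptables rules, and removes the namespace. A dry-run sketch under the names used in this log; the `scripts/rpc.py` path is assumed, and the iptables step mirrors the log's `iptr` helper (re-save the ruleset minus any rule carrying the SPDK_NVMF comment):

```shell
#!/bin/sh
# Dry-run sketch of the cleanup sequence traced above; commands are echoed.
NS=cvl_0_0_ns_spdk
NQN=nqn.2016-06.io.spdk:cnode1
run() { echo "$@"; }

run nvme disconnect -n "$NQN"
run scripts/rpc.py nvmf_delete_subsystem "$NQN"
# iptr: drop rules tagged with the SPDK_NVMF comment, keep everything else
run sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete "$NS"
run ip -4 addr flush cvl_0_1
```

Filtering by the comment added at rule-insertion time is what lets the harness remove only its own firewall rules without touching the host's existing ruleset.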
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:57.022 rmmod nvme_tcp 00:27:57.022 rmmod nvme_fabrics 00:27:57.022 rmmod nvme_keyring 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 798109 ']' 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 798109 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 798109 ']' 00:27:57.022 
08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 798109 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 798109 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 798109' 00:27:57.022 killing process with pid 798109 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 798109 00:27:57.022 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 798109 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.023 08:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.591 08:01:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:57.591 00:27:57.591 real 1m8.255s 00:27:57.591 user 4m10.962s 00:27:57.591 sys 0m6.306s 00:27:57.591 08:01:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.591 08:01:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.591 ************************************ 00:27:57.591 END TEST nvmf_initiator_timeout 00:27:57.591 ************************************ 00:27:57.591 08:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:57.591 08:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:57.591 08:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:57.591 08:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.591 08:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- 
# pci_devs=() 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:00.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.126 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:00.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:00.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:00.127 08:01:52 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:00.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:00.127 ************************************ 00:28:00.127 START 
TEST nvmf_perf_adq 00:28:00.127 ************************************ 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:00.127 * Looking for test storage... 00:28:00.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.127 08:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:00.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.127 --rc genhtml_branch_coverage=1 00:28:00.127 --rc genhtml_function_coverage=1 00:28:00.127 --rc genhtml_legend=1 00:28:00.127 --rc geninfo_all_blocks=1 00:28:00.127 --rc geninfo_unexecuted_blocks=1 00:28:00.127 00:28:00.127 ' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:00.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.127 --rc genhtml_branch_coverage=1 00:28:00.127 --rc genhtml_function_coverage=1 00:28:00.127 --rc genhtml_legend=1 00:28:00.127 --rc geninfo_all_blocks=1 00:28:00.127 --rc geninfo_unexecuted_blocks=1 00:28:00.127 00:28:00.127 ' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:00.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.127 --rc genhtml_branch_coverage=1 00:28:00.127 --rc genhtml_function_coverage=1 00:28:00.127 --rc genhtml_legend=1 00:28:00.127 --rc geninfo_all_blocks=1 00:28:00.127 --rc geninfo_unexecuted_blocks=1 00:28:00.127 00:28:00.127 ' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:00.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.127 --rc genhtml_branch_coverage=1 00:28:00.127 --rc genhtml_function_coverage=1 00:28:00.127 --rc genhtml_legend=1 00:28:00.127 --rc geninfo_all_blocks=1 00:28:00.127 --rc geninfo_unexecuted_blocks=1 00:28:00.127 00:28:00.127 ' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.127 
08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.127 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:00.128 08:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.128 08:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.035 08:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:02.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:02.035 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:02.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:02.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:02.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:02.036 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:02.605 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:05.139 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:10.472 08:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.472 08:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:10.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:10.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.472 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:10.473 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:10.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:28:10.473 00:28:10.473 --- 10.0.0.2 ping statistics --- 00:28:10.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.473 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:10.473 00:28:10.473 --- 10.0.0.1 ping statistics --- 00:28:10.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.473 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=810250 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 810250 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 810250 ']' 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.473 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.473 [2024-11-18 08:02:03.398598] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:10.473 [2024-11-18 08:02:03.398687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.473 [2024-11-18 08:02:03.477679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.473 [2024-11-18 08:02:03.526567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.473 [2024-11-18 08:02:03.526629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:10.473 [2024-11-18 08:02:03.526643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.473 [2024-11-18 08:02:03.526654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.473 [2024-11-18 08:02:03.526664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.473 [2024-11-18 08:02:03.528228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.473 [2024-11-18 08:02:03.528292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.473 [2024-11-18 08:02:03.528323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.473 [2024-11-18 08:02:03.528325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.733 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.733 [2024-11-18 08:02:03.819602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.994 
08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.994 Malloc1 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.994 [2024-11-18 08:02:03.882381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=810285 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:10.994 08:02:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:12.898 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:12.898 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.898 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.898 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.898 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:12.898 "tick_rate": 2700000000, 00:28:12.898 "poll_groups": [ 00:28:12.898 { 00:28:12.898 "name": "nvmf_tgt_poll_group_000", 00:28:12.898 "admin_qpairs": 1, 00:28:12.898 "io_qpairs": 1, 00:28:12.898 "current_admin_qpairs": 1, 00:28:12.898 "current_io_qpairs": 1, 00:28:12.898 "pending_bdev_io": 0, 00:28:12.898 "completed_nvme_io": 19596, 00:28:12.898 "transports": [ 00:28:12.898 { 00:28:12.898 "trtype": "TCP" 00:28:12.898 } 00:28:12.898 ] 00:28:12.898 }, 00:28:12.898 { 00:28:12.898 "name": "nvmf_tgt_poll_group_001", 00:28:12.898 "admin_qpairs": 0, 00:28:12.898 "io_qpairs": 1, 00:28:12.898 "current_admin_qpairs": 0, 00:28:12.898 "current_io_qpairs": 1, 00:28:12.898 "pending_bdev_io": 0, 00:28:12.898 "completed_nvme_io": 20038, 00:28:12.898 "transports": [ 
00:28:12.898 { 00:28:12.898 "trtype": "TCP" 00:28:12.898 } 00:28:12.898 ] 00:28:12.898 }, 00:28:12.898 { 00:28:12.898 "name": "nvmf_tgt_poll_group_002", 00:28:12.898 "admin_qpairs": 0, 00:28:12.898 "io_qpairs": 1, 00:28:12.898 "current_admin_qpairs": 0, 00:28:12.898 "current_io_qpairs": 1, 00:28:12.898 "pending_bdev_io": 0, 00:28:12.898 "completed_nvme_io": 20093, 00:28:12.898 "transports": [ 00:28:12.898 { 00:28:12.898 "trtype": "TCP" 00:28:12.898 } 00:28:12.898 ] 00:28:12.898 }, 00:28:12.898 { 00:28:12.898 "name": "nvmf_tgt_poll_group_003", 00:28:12.898 "admin_qpairs": 0, 00:28:12.898 "io_qpairs": 1, 00:28:12.898 "current_admin_qpairs": 0, 00:28:12.898 "current_io_qpairs": 1, 00:28:12.898 "pending_bdev_io": 0, 00:28:12.898 "completed_nvme_io": 19121, 00:28:12.898 "transports": [ 00:28:12.898 { 00:28:12.898 "trtype": "TCP" 00:28:12.898 } 00:28:12.898 ] 00:28:12.898 } 00:28:12.898 ] 00:28:12.898 }' 00:28:12.898 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:12.898 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:12.898 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:12.899 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:12.899 08:02:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 810285 00:28:21.016 Initializing NVMe Controllers 00:28:21.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:21.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:21.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:21.016 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:21.016 Initialization complete. Launching workers. 00:28:21.016 ======================================================== 00:28:21.016 Latency(us) 00:28:21.016 Device Information : IOPS MiB/s Average min max 00:28:21.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10089.00 39.41 6345.37 2295.70 11309.22 00:28:21.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10622.50 41.49 6024.49 2575.44 9800.32 00:28:21.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10640.90 41.57 6015.77 2552.47 10142.88 00:28:21.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10389.80 40.59 6160.14 2380.54 10315.48 00:28:21.016 ======================================================== 00:28:21.016 Total : 41742.19 163.06 6133.59 2295.70 11309.22 00:28:21.016 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.016 rmmod nvme_tcp 00:28:21.016 rmmod nvme_fabrics 00:28:21.016 rmmod nvme_keyring 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:21.016 08:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 810250 ']' 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 810250 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 810250 ']' 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 810250 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.016 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 810250 00:28:21.275 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 810250' 00:28:21.276 killing process with pid 810250 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 810250 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 810250 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:21.276 08:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.276 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.814 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:23.814 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:23.814 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:23.814 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:24.073 08:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:26.609 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.887 08:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:31.887 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.887 08:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:31.887 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.887 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:31.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:31.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:28:31.888 00:28:31.888 --- 10.0.0.2 ping statistics --- 00:28:31.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.888 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:28:31.888 00:28:31.888 --- 10.0.0.1 ping statistics --- 00:28:31.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.888 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:31.888 net.core.busy_poll = 1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:31.888 net.core.busy_read = 1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=812918 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
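The trace above configures ADQ with `tc qdisc add ... mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel`, then steers NVMe/TCP traffic (dst port 4420) into traffic class 1 with a flower filter. The `count@offset` queue spec is what assigns hardware queues to each traffic class; a minimal illustrative parser (not part of the test scripts) shows the mapping that invocation produces:

```python
# Illustrative only: parse an mqprio "count@offset" queue spec, as used in
# the trace above ("num_tc 2 map 0 1 queues 2@0 2@2"), into the hardware
# queue range each traffic class owns.
def mqprio_queues(spec):
    """Return {tc: [queue, ...]} for a space-separated count@offset spec."""
    tcs = {}
    for tc, part in enumerate(spec.split()):
        count, offset = (int(x) for x in part.split("@"))
        tcs[tc] = list(range(offset, offset + count))
    return tcs

# TC 0 (default traffic) gets queues 0-1; TC 1 (ADQ traffic, i.e. NVMe/TCP
# port 4420 matched by the flower filter) gets queues 2-3.
print(mqprio_queues("2@0 2@2"))  # {0: [0, 1], 1: [2, 3]}
```

With this split, the busy-poll sysctls set just before (`net.core.busy_poll=1`, `net.core.busy_read=1`) apply while the dedicated TC 1 queues carry only the NVMe/TCP connections.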
812918 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 812918 ']' 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.888 [2024-11-18 08:02:24.720224] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:31.888 [2024-11-18 08:02:24.720314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.888 [2024-11-18 08:02:24.794722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:31.888 [2024-11-18 08:02:24.843717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.888 [2024-11-18 08:02:24.843775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.888 [2024-11-18 08:02:24.843798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.888 [2024-11-18 08:02:24.843809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:31.888 [2024-11-18 08:02:24.843819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.888 [2024-11-18 08:02:24.845297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.888 [2024-11-18 08:02:24.845359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.888 [2024-11-18 08:02:24.845425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.888 [2024-11-18 08:02:24.845428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:31.888 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:31.889 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.147 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.147 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:32.147 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:32.147 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:32.147 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.147 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.147 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.147 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.148 [2024-11-18 08:02:25.120289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.148 08:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.148 Malloc1 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.148 [2024-11-18 08:02:25.182311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=813062 
00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:32.148 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:34.682 "tick_rate": 2700000000, 00:28:34.682 "poll_groups": [ 00:28:34.682 { 00:28:34.682 "name": "nvmf_tgt_poll_group_000", 00:28:34.682 "admin_qpairs": 1, 00:28:34.682 "io_qpairs": 2, 00:28:34.682 "current_admin_qpairs": 1, 00:28:34.682 "current_io_qpairs": 2, 00:28:34.682 "pending_bdev_io": 0, 00:28:34.682 "completed_nvme_io": 24259, 00:28:34.682 "transports": [ 00:28:34.682 { 00:28:34.682 "trtype": "TCP" 00:28:34.682 } 00:28:34.682 ] 00:28:34.682 }, 00:28:34.682 { 00:28:34.682 "name": "nvmf_tgt_poll_group_001", 00:28:34.682 "admin_qpairs": 0, 00:28:34.682 "io_qpairs": 2, 00:28:34.682 "current_admin_qpairs": 0, 00:28:34.682 "current_io_qpairs": 2, 00:28:34.682 "pending_bdev_io": 0, 00:28:34.682 "completed_nvme_io": 24071, 00:28:34.682 "transports": [ 00:28:34.682 { 00:28:34.682 "trtype": "TCP" 00:28:34.682 } 00:28:34.682 ] 00:28:34.682 }, 00:28:34.682 { 00:28:34.682 "name": "nvmf_tgt_poll_group_002", 00:28:34.682 "admin_qpairs": 0, 00:28:34.682 "io_qpairs": 0, 00:28:34.682 "current_admin_qpairs": 0, 
00:28:34.682 "current_io_qpairs": 0, 00:28:34.682 "pending_bdev_io": 0, 00:28:34.682 "completed_nvme_io": 0, 00:28:34.682 "transports": [ 00:28:34.682 { 00:28:34.682 "trtype": "TCP" 00:28:34.682 } 00:28:34.682 ] 00:28:34.682 }, 00:28:34.682 { 00:28:34.682 "name": "nvmf_tgt_poll_group_003", 00:28:34.682 "admin_qpairs": 0, 00:28:34.682 "io_qpairs": 0, 00:28:34.682 "current_admin_qpairs": 0, 00:28:34.682 "current_io_qpairs": 0, 00:28:34.682 "pending_bdev_io": 0, 00:28:34.682 "completed_nvme_io": 0, 00:28:34.682 "transports": [ 00:28:34.682 { 00:28:34.682 "trtype": "TCP" 00:28:34.682 } 00:28:34.682 ] 00:28:34.682 } 00:28:34.682 ] 00:28:34.682 }' 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:34.682 08:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 813062 00:28:42.802 Initializing NVMe Controllers 00:28:42.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:42.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:42.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:42.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:42.802 Initialization complete. Launching workers. 
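The check traced above pipes `nvmf_get_stats` through `jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l` and then tests `[[ 2 -lt 2 ]]`: with four poll groups and ADQ steering I/O onto two of them, exactly two groups should be idle, and fewer than two idle groups would mean the steering failed. A hedged Python equivalent over a stats dict shaped like the JSON above (trimmed to the fields the check uses):

```python
import json

# Sketch of the jq/wc check above: count poll groups with no active I/O
# qpairs. Stats trimmed to the fields the check inspects.
stats = json.loads("""
{
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 2},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 2},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0}
  ]
}
""")

idle = [g["name"] for g in stats["poll_groups"] if g["current_io_qpairs"] == 0]
print(len(idle))  # 2 -> check passes: ADQ confined I/O to two poll groups
```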
00:28:42.802 ======================================================== 00:28:42.802 Latency(us) 00:28:42.802 Device Information : IOPS MiB/s Average min max 00:28:42.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8918.90 34.84 7176.48 1450.45 54281.28 00:28:42.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6593.80 25.76 9736.19 1868.81 54155.03 00:28:42.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6530.20 25.51 9836.62 1654.06 54479.35 00:28:42.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4665.00 18.22 13721.00 2093.18 55207.44 00:28:42.803 ======================================================== 00:28:42.803 Total : 26707.89 104.33 9601.97 1450.45 55207.44 00:28:42.803 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:42.803 rmmod nvme_tcp 00:28:42.803 rmmod nvme_fabrics 00:28:42.803 rmmod nvme_keyring 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:42.803 08:02:35 
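The Total row in the `spdk_nvme_perf` summary above can be sanity-checked: total IOPS is the sum over cores, and the overall average latency is the IOPS-weighted mean of the per-core averages (per-core figures copied from the table):

```python
# Sanity check on the perf summary above: the Total row's average latency
# should be the IOPS-weighted mean of the per-core averages.
cores = {  # lcore: (IOPS, average latency in us), copied from the table
    4: (8918.90, 7176.48),
    5: (6593.80, 9736.19),
    6: (6530.20, 9836.62),
    7: (4665.00, 13721.00),
}

total_iops = sum(iops for iops, _ in cores.values())
weighted_avg = sum(iops * lat for iops, lat in cores.values()) / total_iops

print(round(total_iops, 2))    # ≈ 26707.90 (table reports 26707.89)
print(round(weighted_avg, 2))  # ≈ 9601.97, matching the reported average
```

The higher latency on lcore 7 also explains its lower IOPS share at the fixed queue depth of 64.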
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 812918 ']' 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 812918 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 812918 ']' 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 812918 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 812918 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 812918' 00:28:42.803 killing process with pid 812918 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 812918 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 812918 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:42.803 08:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.803 08:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.097 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:46.097 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:46.097 00:28:46.097 real 0m46.174s 00:28:46.097 user 2m40.617s 00:28:46.097 sys 0m9.185s 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.098 ************************************ 00:28:46.098 END TEST nvmf_perf_adq 00:28:46.098 ************************************ 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.098 ************************************ 00:28:46.098 START TEST nvmf_shutdown 00:28:46.098 ************************************ 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:46.098 * Looking for test storage... 00:28:46.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.098 08:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.098 08:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:46.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.098 --rc genhtml_branch_coverage=1 00:28:46.098 --rc genhtml_function_coverage=1 00:28:46.098 --rc genhtml_legend=1 00:28:46.098 --rc geninfo_all_blocks=1 00:28:46.098 --rc geninfo_unexecuted_blocks=1 00:28:46.098 00:28:46.098 ' 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:46.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.098 --rc genhtml_branch_coverage=1 00:28:46.098 --rc genhtml_function_coverage=1 00:28:46.098 --rc genhtml_legend=1 00:28:46.098 --rc geninfo_all_blocks=1 00:28:46.098 --rc geninfo_unexecuted_blocks=1 00:28:46.098 00:28:46.098 ' 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:46.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.098 --rc genhtml_branch_coverage=1 00:28:46.098 --rc genhtml_function_coverage=1 00:28:46.098 --rc genhtml_legend=1 00:28:46.098 --rc geninfo_all_blocks=1 00:28:46.098 --rc geninfo_unexecuted_blocks=1 00:28:46.098 00:28:46.098 ' 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:46.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.098 --rc genhtml_branch_coverage=1 00:28:46.098 --rc genhtml_function_coverage=1 00:28:46.098 --rc genhtml_legend=1 00:28:46.098 --rc geninfo_all_blocks=1 00:28:46.098 --rc geninfo_unexecuted_blocks=1 00:28:46.098 00:28:46.098 ' 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.098 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:46.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:46.099 ************************************ 00:28:46.099 START TEST nvmf_shutdown_tc1 00:28:46.099 ************************************ 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.099 08:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:48.001 08:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.001 08:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:48.001 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.001 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.001 08:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:48.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.260 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:48.261 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:48.261 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.261 08:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:28:48.261 00:28:48.261 --- 10.0.0.2 ping statistics --- 00:28:48.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.261 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:28:48.261 00:28:48.261 --- 10.0.0.1 ping statistics --- 00:28:48.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.261 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=816364 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 816364 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 816364 ']' 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:48.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.261 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:48.261 [2024-11-18 08:02:41.300676] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:48.261 [2024-11-18 08:02:41.300756] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.520 [2024-11-18 08:02:41.373284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.520 [2024-11-18 08:02:41.417748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.520 [2024-11-18 08:02:41.417808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.520 [2024-11-18 08:02:41.417830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.520 [2024-11-18 08:02:41.417841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.520 [2024-11-18 08:02:41.417850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:48.520 [2024-11-18 08:02:41.419522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.520 [2024-11-18 08:02:41.419606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.520 [2024-11-18 08:02:41.419672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:48.520 [2024-11-18 08:02:41.419675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:48.520 [2024-11-18 08:02:41.562524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.520 08:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.520 08:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:48.780 Malloc1 00:28:48.780 [2024-11-18 08:02:41.669400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.780 Malloc2 00:28:48.780 Malloc3 00:28:48.780 Malloc4 00:28:48.780 Malloc5 00:28:49.039 Malloc6 00:28:49.039 Malloc7 00:28:49.039 Malloc8 00:28:49.039 Malloc9 
00:28:49.039 Malloc10 00:28:49.039 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.039 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:49.039 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.039 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=816540 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 816540 /var/tmp/bdevperf.sock 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 816540 ']' 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:49.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.298 { 00:28:49.298 "params": { 00:28:49.298 "name": "Nvme$subsystem", 00:28:49.298 "trtype": "$TEST_TRANSPORT", 00:28:49.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.298 "adrfam": "ipv4", 00:28:49.298 "trsvcid": "$NVMF_PORT", 00:28:49.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.298 "hdgst": ${hdgst:-false}, 00:28:49.298 "ddgst": ${ddgst:-false} 00:28:49.298 }, 00:28:49.298 "method": "bdev_nvme_attach_controller" 00:28:49.298 } 00:28:49.298 EOF 00:28:49.298 )") 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.298 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.298 { 00:28:49.298 "params": { 00:28:49.298 "name": "Nvme$subsystem", 00:28:49.298 "trtype": "$TEST_TRANSPORT", 00:28:49.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.298 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": ${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.299 { 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme$subsystem", 00:28:49.299 "trtype": "$TEST_TRANSPORT", 00:28:49.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": ${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.299 { 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme$subsystem", 00:28:49.299 "trtype": "$TEST_TRANSPORT", 00:28:49.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": 
${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.299 { 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme$subsystem", 00:28:49.299 "trtype": "$TEST_TRANSPORT", 00:28:49.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": ${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.299 { 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme$subsystem", 00:28:49.299 "trtype": "$TEST_TRANSPORT", 00:28:49.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": ${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 
00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.299 { 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme$subsystem", 00:28:49.299 "trtype": "$TEST_TRANSPORT", 00:28:49.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": ${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.299 { 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme$subsystem", 00:28:49.299 "trtype": "$TEST_TRANSPORT", 00:28:49.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": ${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.299 { 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme$subsystem", 00:28:49.299 "trtype": "$TEST_TRANSPORT", 00:28:49.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": ${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.299 { 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme$subsystem", 00:28:49.299 "trtype": "$TEST_TRANSPORT", 00:28:49.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "$NVMF_PORT", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.299 "hdgst": ${hdgst:-false}, 00:28:49.299 "ddgst": ${ddgst:-false} 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 } 00:28:49.299 EOF 00:28:49.299 )") 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:49.299 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme1", 00:28:49.299 "trtype": "tcp", 00:28:49.299 "traddr": "10.0.0.2", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "4420", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:49.299 "hdgst": false, 00:28:49.299 "ddgst": false 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 },{ 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme2", 00:28:49.299 "trtype": "tcp", 00:28:49.299 "traddr": "10.0.0.2", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "4420", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:49.299 "hdgst": false, 00:28:49.299 "ddgst": false 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 },{ 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme3", 00:28:49.299 "trtype": "tcp", 00:28:49.299 "traddr": "10.0.0.2", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "4420", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:49.299 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:49.299 "hdgst": false, 00:28:49.299 "ddgst": false 00:28:49.299 }, 00:28:49.299 "method": "bdev_nvme_attach_controller" 00:28:49.299 },{ 00:28:49.299 "params": { 00:28:49.299 "name": "Nvme4", 00:28:49.299 "trtype": "tcp", 00:28:49.299 "traddr": "10.0.0.2", 00:28:49.299 "adrfam": "ipv4", 00:28:49.299 "trsvcid": "4420", 00:28:49.299 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:49.300 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:49.300 "hdgst": false, 00:28:49.300 "ddgst": false 00:28:49.300 }, 00:28:49.300 "method": "bdev_nvme_attach_controller" 00:28:49.300 },{ 
00:28:49.300 "params": { 00:28:49.300 "name": "Nvme5", 00:28:49.300 "trtype": "tcp", 00:28:49.300 "traddr": "10.0.0.2", 00:28:49.300 "adrfam": "ipv4", 00:28:49.300 "trsvcid": "4420", 00:28:49.300 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:49.300 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:49.300 "hdgst": false, 00:28:49.300 "ddgst": false 00:28:49.300 }, 00:28:49.300 "method": "bdev_nvme_attach_controller" 00:28:49.300 },{ 00:28:49.300 "params": { 00:28:49.300 "name": "Nvme6", 00:28:49.300 "trtype": "tcp", 00:28:49.300 "traddr": "10.0.0.2", 00:28:49.300 "adrfam": "ipv4", 00:28:49.300 "trsvcid": "4420", 00:28:49.300 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:49.300 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:49.300 "hdgst": false, 00:28:49.300 "ddgst": false 00:28:49.300 }, 00:28:49.300 "method": "bdev_nvme_attach_controller" 00:28:49.300 },{ 00:28:49.300 "params": { 00:28:49.300 "name": "Nvme7", 00:28:49.300 "trtype": "tcp", 00:28:49.300 "traddr": "10.0.0.2", 00:28:49.300 "adrfam": "ipv4", 00:28:49.300 "trsvcid": "4420", 00:28:49.300 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:49.300 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:49.300 "hdgst": false, 00:28:49.300 "ddgst": false 00:28:49.300 }, 00:28:49.300 "method": "bdev_nvme_attach_controller" 00:28:49.300 },{ 00:28:49.300 "params": { 00:28:49.300 "name": "Nvme8", 00:28:49.300 "trtype": "tcp", 00:28:49.300 "traddr": "10.0.0.2", 00:28:49.300 "adrfam": "ipv4", 00:28:49.300 "trsvcid": "4420", 00:28:49.300 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:49.300 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:49.300 "hdgst": false, 00:28:49.300 "ddgst": false 00:28:49.300 }, 00:28:49.300 "method": "bdev_nvme_attach_controller" 00:28:49.300 },{ 00:28:49.300 "params": { 00:28:49.300 "name": "Nvme9", 00:28:49.300 "trtype": "tcp", 00:28:49.300 "traddr": "10.0.0.2", 00:28:49.300 "adrfam": "ipv4", 00:28:49.300 "trsvcid": "4420", 00:28:49.300 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:49.300 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:49.300 "hdgst": false, 00:28:49.300 "ddgst": false 00:28:49.300 }, 00:28:49.300 "method": "bdev_nvme_attach_controller" 00:28:49.300 },{ 00:28:49.300 "params": { 00:28:49.300 "name": "Nvme10", 00:28:49.300 "trtype": "tcp", 00:28:49.300 "traddr": "10.0.0.2", 00:28:49.300 "adrfam": "ipv4", 00:28:49.300 "trsvcid": "4420", 00:28:49.300 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:49.300 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:49.300 "hdgst": false, 00:28:49.300 "ddgst": false 00:28:49.300 }, 00:28:49.300 "method": "bdev_nvme_attach_controller" 00:28:49.300 }' 00:28:49.300 [2024-11-18 08:02:42.191324] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:49.300 [2024-11-18 08:02:42.191412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:49.300 [2024-11-18 08:02:42.265085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.300 [2024-11-18 08:02:42.311997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 816540 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:51.205 08:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:52.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 816540 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 816364 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.584 { 00:28:52.584 "params": { 00:28:52.584 "name": "Nvme$subsystem", 00:28:52.584 "trtype": "$TEST_TRANSPORT", 00:28:52.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.584 "adrfam": "ipv4", 00:28:52.584 "trsvcid": "$NVMF_PORT", 00:28:52.584 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.584 "hdgst": ${hdgst:-false}, 00:28:52.584 "ddgst": ${ddgst:-false} 00:28:52.584 }, 00:28:52.584 "method": "bdev_nvme_attach_controller" 00:28:52.584 } 00:28:52.584 EOF 00:28:52.584 )") 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.584 { 00:28:52.584 "params": { 00:28:52.584 "name": "Nvme$subsystem", 00:28:52.584 "trtype": "$TEST_TRANSPORT", 00:28:52.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.584 "adrfam": "ipv4", 00:28:52.584 "trsvcid": "$NVMF_PORT", 00:28:52.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.584 "hdgst": ${hdgst:-false}, 00:28:52.584 "ddgst": ${ddgst:-false} 00:28:52.584 }, 00:28:52.584 "method": "bdev_nvme_attach_controller" 00:28:52.584 } 00:28:52.584 EOF 00:28:52.584 )") 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.584 { 00:28:52.584 "params": { 00:28:52.584 "name": "Nvme$subsystem", 00:28:52.584 "trtype": "$TEST_TRANSPORT", 00:28:52.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.584 "adrfam": "ipv4", 00:28:52.584 "trsvcid": "$NVMF_PORT", 00:28:52.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.584 "hdgst": 
${hdgst:-false}, 00:28:52.584 "ddgst": ${ddgst:-false} 00:28:52.584 }, 00:28:52.584 "method": "bdev_nvme_attach_controller" 00:28:52.584 } 00:28:52.584 EOF 00:28:52.584 )") 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.584 { 00:28:52.584 "params": { 00:28:52.584 "name": "Nvme$subsystem", 00:28:52.584 "trtype": "$TEST_TRANSPORT", 00:28:52.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.584 "adrfam": "ipv4", 00:28:52.584 "trsvcid": "$NVMF_PORT", 00:28:52.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.584 "hdgst": ${hdgst:-false}, 00:28:52.584 "ddgst": ${ddgst:-false} 00:28:52.584 }, 00:28:52.584 "method": "bdev_nvme_attach_controller" 00:28:52.584 } 00:28:52.584 EOF 00:28:52.584 )") 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.584 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.584 { 00:28:52.584 "params": { 00:28:52.584 "name": "Nvme$subsystem", 00:28:52.584 "trtype": "$TEST_TRANSPORT", 00:28:52.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.584 "adrfam": "ipv4", 00:28:52.584 "trsvcid": "$NVMF_PORT", 00:28:52.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.585 "hdgst": ${hdgst:-false}, 00:28:52.585 "ddgst": ${ddgst:-false} 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 
00:28:52.585 } 00:28:52.585 EOF 00:28:52.585 )") 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.585 { 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme$subsystem", 00:28:52.585 "trtype": "$TEST_TRANSPORT", 00:28:52.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "$NVMF_PORT", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.585 "hdgst": ${hdgst:-false}, 00:28:52.585 "ddgst": ${ddgst:-false} 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 } 00:28:52.585 EOF 00:28:52.585 )") 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.585 { 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme$subsystem", 00:28:52.585 "trtype": "$TEST_TRANSPORT", 00:28:52.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "$NVMF_PORT", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.585 "hdgst": ${hdgst:-false}, 00:28:52.585 "ddgst": ${ddgst:-false} 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 } 00:28:52.585 EOF 00:28:52.585 )") 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.585 { 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme$subsystem", 00:28:52.585 "trtype": "$TEST_TRANSPORT", 00:28:52.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "$NVMF_PORT", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.585 "hdgst": ${hdgst:-false}, 00:28:52.585 "ddgst": ${ddgst:-false} 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 } 00:28:52.585 EOF 00:28:52.585 )") 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.585 { 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme$subsystem", 00:28:52.585 "trtype": "$TEST_TRANSPORT", 00:28:52.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "$NVMF_PORT", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.585 "hdgst": ${hdgst:-false}, 00:28:52.585 "ddgst": ${ddgst:-false} 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 } 00:28:52.585 EOF 00:28:52.585 )") 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.585 { 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme$subsystem", 00:28:52.585 "trtype": "$TEST_TRANSPORT", 00:28:52.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "$NVMF_PORT", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.585 "hdgst": ${hdgst:-false}, 00:28:52.585 "ddgst": ${ddgst:-false} 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 } 00:28:52.585 EOF 00:28:52.585 )") 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:52.585 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme1", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 },{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme2", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 
00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 },{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme3", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 },{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme4", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 },{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme5", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 },{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme6", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 },{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme7", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 },{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme8", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 00:28:52.585 "method": "bdev_nvme_attach_controller" 00:28:52.585 },{ 00:28:52.585 "params": { 00:28:52.585 "name": "Nvme9", 00:28:52.585 "trtype": "tcp", 00:28:52.585 "traddr": "10.0.0.2", 00:28:52.585 "adrfam": "ipv4", 00:28:52.585 "trsvcid": "4420", 00:28:52.585 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:52.585 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:52.585 "hdgst": false, 00:28:52.585 "ddgst": false 00:28:52.585 }, 00:28:52.586 "method": "bdev_nvme_attach_controller" 00:28:52.586 },{ 00:28:52.586 "params": { 00:28:52.586 "name": "Nvme10", 00:28:52.586 "trtype": "tcp", 00:28:52.586 "traddr": "10.0.0.2", 00:28:52.586 "adrfam": "ipv4", 00:28:52.586 "trsvcid": "4420", 00:28:52.586 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:52.586 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:52.586 "hdgst": false, 00:28:52.586 "ddgst": false 00:28:52.586 }, 00:28:52.586 "method": "bdev_nvme_attach_controller" 00:28:52.586 }' 00:28:52.586 [2024-11-18 08:02:45.302884] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
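The xtrace above repeats one pattern per subsystem: `nvmf/common.sh` expands a heredoc into a JSON fragment, appends it to a `config` array, and finally comma-joins the fragments (the `IFS=,` / `printf '%s\n'` step) before piping through `jq`. A minimal standalone sketch of that pattern follows; the function name `gen_nvmf_conf` and the literal `tcp`/`10.0.0.2`/`4420` values are assumptions that simply mirror the resolved values printed in the log, not the actual helper in `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly loop seen in the xtrace: one JSON
# fragment per subsystem, accumulated in an array, then comma-joined.
gen_nvmf_conf() {
	local config=()
	local subsystem
	for subsystem in "$@"; do
		# Heredoc expands $subsystem and the hdgst/ddgst defaults,
		# just as in the traced loop body above.
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
	done
	# Comma-join the fragments, like the IFS=, / printf step in the log.
	local IFS=,
	printf '%s\n' "${config[*]}"
}

# Usage: emit attach-controller params for subsystems 1 and 2.
gen_nvmf_conf 1 2
```

The joined output is not valid JSON on its own (it is `{...},{...}`), which is why the real script feeds it through `jq .` for normalization before handing it to bdevperf.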
00:28:52.586 [2024-11-18 08:02:45.302961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816847 ] 00:28:52.586 [2024-11-18 08:02:45.377043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.586 [2024-11-18 08:02:45.424899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.965 Running I/O for 1 seconds... 00:28:55.165 1744.00 IOPS, 109.00 MiB/s 00:28:55.165 Latency(us) 00:28:55.165 [2024-11-18T07:02:48.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.165 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.165 Verification LBA range: start 0x0 length 0x400 00:28:55.165 Nvme1n1 : 1.13 230.33 14.40 0.00 0.00 273591.08 8398.32 246997.90 00:28:55.165 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.165 Verification LBA range: start 0x0 length 0x400 00:28:55.165 Nvme2n1 : 1.14 225.42 14.09 0.00 0.00 276403.58 18155.90 264085.81 00:28:55.165 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.165 Verification LBA range: start 0x0 length 0x400 00:28:55.165 Nvme3n1 : 1.15 222.18 13.89 0.00 0.00 276134.49 22330.79 264085.81 00:28:55.165 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.165 Verification LBA range: start 0x0 length 0x400 00:28:55.165 Nvme4n1 : 1.12 229.37 14.34 0.00 0.00 262465.80 18058.81 274959.93 00:28:55.165 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.165 Verification LBA range: start 0x0 length 0x400 00:28:55.165 Nvme5n1 : 1.15 226.49 14.16 0.00 0.00 260738.43 6359.42 242337.56 00:28:55.165 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.165 Verification LBA range: start 0x0 length 
0x400 00:28:55.165 Nvme6n1 : 1.15 223.41 13.96 0.00 0.00 260710.40 19903.53 259425.47 00:28:55.165 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.165 Verification LBA range: start 0x0 length 0x400 00:28:55.166 Nvme7n1 : 1.14 230.49 14.41 0.00 0.00 245652.05 1990.35 262532.36 00:28:55.166 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.166 Verification LBA range: start 0x0 length 0x400 00:28:55.166 Nvme8n1 : 1.16 221.32 13.83 0.00 0.00 254416.59 16117.00 284280.60 00:28:55.166 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.166 Verification LBA range: start 0x0 length 0x400 00:28:55.166 Nvme9n1 : 1.18 216.80 13.55 0.00 0.00 255861.57 35535.08 267192.70 00:28:55.166 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.166 Verification LBA range: start 0x0 length 0x400 00:28:55.166 Nvme10n1 : 1.20 267.25 16.70 0.00 0.00 204360.02 5170.06 284280.60 00:28:55.166 [2024-11-18T07:02:48.254Z] =================================================================================================================== 00:28:55.166 [2024-11-18T07:02:48.254Z] Total : 2293.06 143.32 0.00 0.00 255767.10 1990.35 284280.60 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.166 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.166 rmmod nvme_tcp 00:28:55.424 rmmod nvme_fabrics 00:28:55.424 rmmod nvme_keyring 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 816364 ']' 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 816364 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 816364 ']' 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 816364 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 816364 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 816364' 00:28:55.424 killing process with pid 816364 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 816364 00:28:55.424 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 816364 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.682 08:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.682 08:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.256 00:28:58.256 real 0m11.758s 00:28:58.256 user 0m34.146s 00:28:58.256 sys 0m3.258s 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.256 ************************************ 00:28:58.256 END TEST nvmf_shutdown_tc1 00:28:58.256 ************************************ 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:58.256 ************************************ 00:28:58.256 START TEST nvmf_shutdown_tc2 00:28:58.256 ************************************ 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:58.256 08:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.256 08:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:58.256 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:58.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:58.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:58.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.257 08:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:58.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.257 08:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:28:58.257 00:28:58.257 --- 10.0.0.2 ping statistics --- 00:28:58.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.257 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:58.257 00:28:58.257 --- 10.0.0.1 ping statistics --- 00:28:58.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.257 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.257 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.258 
08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=817622 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 817622 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 817622 ']' 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.258 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.258 [2024-11-18 08:02:51.146324] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:28:58.258 [2024-11-18 08:02:51.146423] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.258 [2024-11-18 08:02:51.222233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.258 [2024-11-18 08:02:51.271411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.258 [2024-11-18 08:02:51.271478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.258 [2024-11-18 08:02:51.271516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.258 [2024-11-18 08:02:51.271528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.258 [2024-11-18 08:02:51.271538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:58.258 [2024-11-18 08:02:51.273128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.258 [2024-11-18 08:02:51.273191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.258 [2024-11-18 08:02:51.273260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:58.258 [2024-11-18 08:02:51.273263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.525 [2024-11-18 08:02:51.426346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.525 08:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.525 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.525 Malloc1 00:28:58.525 [2024-11-18 08:02:51.521401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.525 Malloc2 00:28:58.525 Malloc3 00:28:58.785 Malloc4 00:28:58.785 Malloc5 00:28:58.785 Malloc6 00:28:58.785 Malloc7 00:28:58.785 Malloc8 00:28:59.044 Malloc9 
00:28:59.044 Malloc10 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=817795 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 817795 /var/tmp/bdevperf.sock 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 817795 ']' 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:59.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.044 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.044 { 00:28:59.044 "params": { 00:28:59.044 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": ${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": ${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": ${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": 
${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": ${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": ${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 
00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": ${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": ${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.045 "trtype": "$TEST_TRANSPORT", 00:28:59.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.045 "adrfam": "ipv4", 00:28:59.045 "trsvcid": "$NVMF_PORT", 00:28:59.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.045 "hdgst": ${hdgst:-false}, 00:28:59.045 "ddgst": ${ddgst:-false} 00:28:59.045 }, 00:28:59.045 "method": "bdev_nvme_attach_controller" 00:28:59.045 } 00:28:59.045 EOF 00:28:59.045 )") 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.045 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.045 { 00:28:59.045 "params": { 00:28:59.045 "name": "Nvme$subsystem", 00:28:59.046 "trtype": "$TEST_TRANSPORT", 00:28:59.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "$NVMF_PORT", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.046 "hdgst": ${hdgst:-false}, 00:28:59.046 "ddgst": ${ddgst:-false} 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 } 00:28:59.046 EOF 00:28:59.046 )") 00:28:59.046 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:59.046 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:28:59.046 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:59.046 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme1", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme2", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme3", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme4", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 
00:28:59.046 "params": { 00:28:59.046 "name": "Nvme5", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme6", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme7", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme8", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme9", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:59.046 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 },{ 00:28:59.046 "params": { 00:28:59.046 "name": "Nvme10", 00:28:59.046 "trtype": "tcp", 00:28:59.046 "traddr": "10.0.0.2", 00:28:59.046 "adrfam": "ipv4", 00:28:59.046 "trsvcid": "4420", 00:28:59.046 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:59.046 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:59.046 "hdgst": false, 00:28:59.046 "ddgst": false 00:28:59.046 }, 00:28:59.046 "method": "bdev_nvme_attach_controller" 00:28:59.046 }' 00:28:59.046 [2024-11-18 08:02:52.043840] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:59.046 [2024-11-18 08:02:52.043914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817795 ] 00:28:59.046 [2024-11-18 08:02:52.120994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.304 [2024-11-18 08:02:52.168189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.679 Running I/O for 10 seconds... 
00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:01.245 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:01.504 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:01.504 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:01.504 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:01.504 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:01.504 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.504 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:01.504 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.504 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 817795 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 817795 ']' 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 817795 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 817795 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 817795' 00:29:01.505 killing process with pid 817795 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 817795 00:29:01.505 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 817795 00:29:01.505 Received 
shutdown signal, test time was about 0.792092 seconds 00:29:01.505 00:29:01.505 Latency(us) 00:29:01.505 [2024-11-18T07:02:54.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.505 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme1n1 : 0.76 252.10 15.76 0.00 0.00 249379.65 28932.93 242337.56 00:29:01.505 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme2n1 : 0.76 257.94 16.12 0.00 0.00 235855.30 7281.78 237677.23 00:29:01.505 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme3n1 : 0.79 242.65 15.17 0.00 0.00 247948.58 21068.61 257872.02 00:29:01.505 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme4n1 : 0.77 250.02 15.63 0.00 0.00 233328.39 17379.18 246997.90 00:29:01.505 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme5n1 : 0.75 171.52 10.72 0.00 0.00 331361.09 39224.51 276513.37 00:29:01.505 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme6n1 : 0.78 245.69 15.36 0.00 0.00 226444.58 22816.24 271853.04 00:29:01.505 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme7n1 : 0.78 252.80 15.80 0.00 0.00 213232.00 2524.35 233016.89 00:29:01.505 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme8n1 : 0.79 243.79 15.24 0.00 0.00 216562.09 
22136.60 239230.67 00:29:01.505 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme9n1 : 0.78 245.40 15.34 0.00 0.00 208056.32 20097.71 248551.35 00:29:01.505 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.505 Verification LBA range: start 0x0 length 0x400 00:29:01.505 Nvme10n1 : 0.76 176.07 11.00 0.00 0.00 277329.79 3155.44 278066.82 00:29:01.505 [2024-11-18T07:02:54.593Z] =================================================================================================================== 00:29:01.505 [2024-11-18T07:02:54.593Z] Total : 2337.97 146.12 0.00 0.00 239671.21 2524.35 278066.82 00:29:01.763 08:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 817622 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 
00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.701 rmmod nvme_tcp 00:29:02.701 rmmod nvme_fabrics 00:29:02.701 rmmod nvme_keyring 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 817622 ']' 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 817622 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 817622 ']' 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 817622 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.701 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 817622 00:29:02.961 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.961 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.961 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 817622' 00:29:02.961 killing process with pid 817622 00:29:02.961 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 817622 00:29:02.961 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 817622 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.219 08:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.219 08:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.757 00:29:05.757 real 0m7.420s 00:29:05.757 user 0m22.058s 00:29:05.757 sys 0m1.451s 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.757 ************************************ 00:29:05.757 END TEST nvmf_shutdown_tc2 00:29:05.757 ************************************ 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:05.757 ************************************ 00:29:05.757 START TEST nvmf_shutdown_tc3 00:29:05.757 ************************************ 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.757 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.758 
08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:05.758 08:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:05.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:05.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.758 08:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:05.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.758 08:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:05.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.758 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:05.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:29:05.759 00:29:05.759 --- 10.0.0.2 ping statistics --- 00:29:05.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.759 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:29:05.759 00:29:05.759 --- 10.0.0.1 ping statistics --- 00:29:05.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.759 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.759 
08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=818701 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 818701 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 818701 ']' 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.759 [2024-11-18 08:02:58.536690] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:05.759 [2024-11-18 08:02:58.536766] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.759 [2024-11-18 08:02:58.611963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.759 [2024-11-18 08:02:58.660200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.759 [2024-11-18 08:02:58.660253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.759 [2024-11-18 08:02:58.660276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.759 [2024-11-18 08:02:58.660286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.759 [2024-11-18 08:02:58.660296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:05.759 [2024-11-18 08:02:58.661729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.759 [2024-11-18 08:02:58.661789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.759 [2024-11-18 08:02:58.661854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.759 [2024-11-18 08:02:58.661857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.759 [2024-11-18 08:02:58.811974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.759 08:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:05.759 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.760 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.018 08:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:06.018 Malloc1 00:29:06.019 [2024-11-18 08:02:58.917895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.019 Malloc2 00:29:06.019 Malloc3 00:29:06.019 Malloc4 00:29:06.019 Malloc5 00:29:06.278 Malloc6 00:29:06.278 Malloc7 00:29:06.278 Malloc8 00:29:06.278 Malloc9 
00:29:06.278 Malloc10 00:29:06.278 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.278 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:06.278 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.278 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=818873 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 818873 /var/tmp/bdevperf.sock 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 818873 ']' 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:06.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.538 { 00:29:06.538 "params": { 00:29:06.538 "name": "Nvme$subsystem", 00:29:06.538 "trtype": "$TEST_TRANSPORT", 00:29:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.538 "adrfam": "ipv4", 00:29:06.538 "trsvcid": "$NVMF_PORT", 00:29:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.538 "hdgst": ${hdgst:-false}, 00:29:06.538 "ddgst": ${ddgst:-false} 00:29:06.538 }, 00:29:06.538 "method": "bdev_nvme_attach_controller" 00:29:06.538 } 00:29:06.538 EOF 00:29:06.538 )") 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.538 { 00:29:06.538 "params": { 00:29:06.538 "name": "Nvme$subsystem", 00:29:06.538 "trtype": "$TEST_TRANSPORT", 00:29:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.538 
"adrfam": "ipv4", 00:29:06.538 "trsvcid": "$NVMF_PORT", 00:29:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.538 "hdgst": ${hdgst:-false}, 00:29:06.538 "ddgst": ${ddgst:-false} 00:29:06.538 }, 00:29:06.538 "method": "bdev_nvme_attach_controller" 00:29:06.538 } 00:29:06.538 EOF 00:29:06.538 )") 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.538 { 00:29:06.538 "params": { 00:29:06.538 "name": "Nvme$subsystem", 00:29:06.538 "trtype": "$TEST_TRANSPORT", 00:29:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.538 "adrfam": "ipv4", 00:29:06.538 "trsvcid": "$NVMF_PORT", 00:29:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.538 "hdgst": ${hdgst:-false}, 00:29:06.538 "ddgst": ${ddgst:-false} 00:29:06.538 }, 00:29:06.538 "method": "bdev_nvme_attach_controller" 00:29:06.538 } 00:29:06.538 EOF 00:29:06.538 )") 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.538 { 00:29:06.538 "params": { 00:29:06.538 "name": "Nvme$subsystem", 00:29:06.538 "trtype": "$TEST_TRANSPORT", 00:29:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.538 "adrfam": "ipv4", 00:29:06.538 "trsvcid": "$NVMF_PORT", 00:29:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.538 "hdgst": ${hdgst:-false}, 00:29:06.538 "ddgst": ${ddgst:-false} 00:29:06.538 }, 00:29:06.538 "method": "bdev_nvme_attach_controller" 00:29:06.538 } 00:29:06.538 EOF 00:29:06.538 )") 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.538 { 00:29:06.538 "params": { 00:29:06.538 "name": "Nvme$subsystem", 00:29:06.538 "trtype": "$TEST_TRANSPORT", 00:29:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.538 "adrfam": "ipv4", 00:29:06.538 "trsvcid": "$NVMF_PORT", 00:29:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.538 "hdgst": ${hdgst:-false}, 00:29:06.538 "ddgst": ${ddgst:-false} 00:29:06.538 }, 00:29:06.538 "method": "bdev_nvme_attach_controller" 00:29:06.538 } 00:29:06.538 EOF 00:29:06.538 )") 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.538 { 00:29:06.538 "params": { 00:29:06.538 "name": "Nvme$subsystem", 00:29:06.538 "trtype": "$TEST_TRANSPORT", 00:29:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.538 "adrfam": "ipv4", 00:29:06.538 "trsvcid": "$NVMF_PORT", 00:29:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.538 "hdgst": ${hdgst:-false}, 00:29:06.538 "ddgst": 
${ddgst:-false} 00:29:06.538 }, 00:29:06.538 "method": "bdev_nvme_attach_controller" 00:29:06.538 } 00:29:06.538 EOF 00:29:06.538 )") 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.538 { 00:29:06.538 "params": { 00:29:06.538 "name": "Nvme$subsystem", 00:29:06.538 "trtype": "$TEST_TRANSPORT", 00:29:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.538 "adrfam": "ipv4", 00:29:06.538 "trsvcid": "$NVMF_PORT", 00:29:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.538 "hdgst": ${hdgst:-false}, 00:29:06.538 "ddgst": ${ddgst:-false} 00:29:06.538 }, 00:29:06.538 "method": "bdev_nvme_attach_controller" 00:29:06.538 } 00:29:06.538 EOF 00:29:06.538 )") 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.538 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.538 { 00:29:06.538 "params": { 00:29:06.538 "name": "Nvme$subsystem", 00:29:06.538 "trtype": "$TEST_TRANSPORT", 00:29:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.538 "adrfam": "ipv4", 00:29:06.538 "trsvcid": "$NVMF_PORT", 00:29:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.539 "hdgst": ${hdgst:-false}, 00:29:06.539 "ddgst": ${ddgst:-false} 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 } 00:29:06.539 EOF 00:29:06.539 
)") 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.539 { 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme$subsystem", 00:29:06.539 "trtype": "$TEST_TRANSPORT", 00:29:06.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "$NVMF_PORT", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.539 "hdgst": ${hdgst:-false}, 00:29:06.539 "ddgst": ${ddgst:-false} 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 } 00:29:06.539 EOF 00:29:06.539 )") 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.539 { 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme$subsystem", 00:29:06.539 "trtype": "$TEST_TRANSPORT", 00:29:06.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "$NVMF_PORT", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.539 "hdgst": ${hdgst:-false}, 00:29:06.539 "ddgst": ${ddgst:-false} 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 } 00:29:06.539 EOF 00:29:06.539 )") 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.539 
08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:06.539 08:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme1", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme2", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme3", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme4", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 
00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme5", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme6", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme7", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme8", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme9", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 },{ 00:29:06.539 "params": { 00:29:06.539 "name": "Nvme10", 00:29:06.539 "trtype": "tcp", 00:29:06.539 "traddr": "10.0.0.2", 00:29:06.539 "adrfam": "ipv4", 00:29:06.539 "trsvcid": "4420", 00:29:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:06.539 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:06.539 "hdgst": false, 00:29:06.539 "ddgst": false 00:29:06.539 }, 00:29:06.539 "method": "bdev_nvme_attach_controller" 00:29:06.539 }' 00:29:06.539 [2024-11-18 08:02:59.440642] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:06.539 [2024-11-18 08:02:59.440721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818873 ] 00:29:06.539 [2024-11-18 08:02:59.517389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.539 [2024-11-18 08:02:59.564691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.440 Running I/O for 10 seconds... 
00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:08.440 08:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.440 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.698 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:08.698 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:08.698 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:08.957 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:09.230 08:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 818701 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 818701 ']' 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 818701 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 818701 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 818701' 00:29:09.230 killing process with pid 818701 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 818701 00:29:09.230 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 818701 00:29:09.230 [2024-11-18 08:03:02.160479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249e2b0 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.160563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249e2b0 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.160578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x249e2b0 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.160591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249e2b0 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.161706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18
08:03:02.162185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.162197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.162209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.162222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.162234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.162246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.162258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.162270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.230 [2024-11-18 08:03:02.162283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162333] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162486] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.162535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b540 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164436] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164607] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164763] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164928] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.164995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165093] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.165168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ba10 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.166500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.231 [2024-11-18 08:03:02.166536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166588] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166739] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166905] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.166992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167056] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167205] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.167313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232bf00 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.169101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232c8a0 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.169126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232c8a0 is same with the state(6) to be set 00:29:09.232 [2024-11-18 08:03:02.169148] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232c8a0 is same with the state(6) to be set
00:29:09.233 [2024-11-18 08:03:02.171266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232cd70 is same with the state(6) to be set
00:29:09.234 [2024-11-18 08:03:02.173136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232d260 is same with the state(6) to be set
00:29:09.235 [2024-11-18 08:03:02.174613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dde0 is same with the state(6) to be set
00:29:09.236 [2024-11-18 08:03:02.176611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:09.236 [2024-11-18 08:03:02.176655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.176681] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.176696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.176711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.176725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.176741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.176756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.176776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502990 is same with the state(6) to be set 00:29:09.236 [2024-11-18 08:03:02.176861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.176882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.176897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.176912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.176926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.176939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.176954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.176967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.176980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529870 is same with the state(6) to be set 00:29:09.236 [2024-11-18 08:03:02.177030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177137] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15316a0 is same with the state(6) to be set 00:29:09.236 [2024-11-18 08:03:02.177200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528e50 is same with the state(6) to be set 00:29:09.236 [2024-11-18 08:03:02.177375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529030 is same with the state(6) to be set 00:29:09.236 [2024-11-18 08:03:02.177583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe2610 is same with the state(6) to be set 00:29:09.236 [2024-11-18 08:03:02.177753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d71c0 is same with the state(6) to be set 00:29:09.236 [2024-11-18 08:03:02.177954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.177975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.177990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.178005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.178020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.178034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.178059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.178073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.236 [2024-11-18 08:03:02.178086] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1501610 is same with the state(6) to be set 00:29:09.236 [2024-11-18 08:03:02.178131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.236 [2024-11-18 08:03:02.178152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.178167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.237 [2024-11-18 08:03:02.178182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.178196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.237 [2024-11-18 08:03:02.178212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.178227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.237 [2024-11-18 08:03:02.178241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.178254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d4ed0 is same with the state(6) to be set 00:29:09.237 [2024-11-18 08:03:02.178299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.237 [2024-11-18 08:03:02.178320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.178336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.237 [2024-11-18 08:03:02.178355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.178369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.237 [2024-11-18 08:03:02.178383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.178397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.237 [2024-11-18 08:03:02.178411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.178424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d7620 is same with the state(6) to be set 00:29:09.237 [2024-11-18 08:03:02.179399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.179969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.179985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 
[2024-11-18 08:03:02.180047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.237 [2024-11-18 08:03:02.180390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.237 [2024-11-18 08:03:02.180405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 
08:03:02.180764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.180980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.180995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.181459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.181520] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.238 [2024-11-18 08:03:02.182192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.182217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.182239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.182256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.182272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.182287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.238 [2024-11-18 08:03:02.182304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.238 [2024-11-18 08:03:02.182319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182931] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.182978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.182995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183103] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.239 [2024-11-18 08:03:02.183390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.239 [2024-11-18 08:03:02.183406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 
08:03:02.183469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.183981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.183996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:09.240 [2024-11-18 08:03:02.184013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.184028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.184044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.184058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.184078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.184094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.184110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.184125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.184141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.184155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.184172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.184186] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.184202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.184217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.184233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.184248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.240 [2024-11-18 08:03:02.186513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.240 [2024-11-18 08:03:02.186530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 
08:03:02.186638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.186979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.186993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 
[2024-11-18 08:03:02.187357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.241 [2024-11-18 08:03:02.187555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-11-18 08:03:02.187569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.187975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.187994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.188010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.188025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.188041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.188056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.188072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 
08:03:02.188086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.188102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.188117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.188133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.188148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.188164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.188178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.188195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.188209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.188328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:09.242 [2024-11-18 08:03:02.188387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502990 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1529870 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15316a0 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528e50 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1529030 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe2610 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d71c0 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1501610 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d4ed0 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.188730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d7620 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.191570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:09.242 [2024-11-18 08:03:02.192539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:09.242 [2024-11-18 08:03:02.192708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.242 [2024-11-18 08:03:02.192741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1502990 with addr=10.0.0.2, port=4420 00:29:09.242 [2024-11-18 08:03:02.192760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1502990 is same with the state(6) to be set 00:29:09.242 [2024-11-18 08:03:02.192858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.242 [2024-11-18 08:03:02.192886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d7620 with addr=10.0.0.2, port=4420 00:29:09.242 [2024-11-18 08:03:02.192902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d7620 is same with the state(6) to be set 00:29:09.242 [2024-11-18 08:03:02.193254] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.242 [2024-11-18 08:03:02.193343] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.242 [2024-11-18 08:03:02.193410] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.242 [2024-11-18 08:03:02.193513] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.242 [2024-11-18 08:03:02.193587] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.242 [2024-11-18 08:03:02.193652] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.242 [2024-11-18 08:03:02.193718] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.242 [2024-11-18 08:03:02.194088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.242 [2024-11-18 08:03:02.194117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15316a0 with addr=10.0.0.2, port=4420 00:29:09.242 [2024-11-18 08:03:02.194135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15316a0 is same with the state(6) to be set 00:29:09.242 [2024-11-18 08:03:02.194156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502990 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.194179] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d7620 (9): Bad file descriptor 00:29:09.242 [2024-11-18 08:03:02.194313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.194339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.194374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.194391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.194409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.194425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.194442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.194457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.194484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.194518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.242 [2024-11-18 08:03:02.194537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-11-18 08:03:02.194552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:09.243 [2024-11-18 08:03:02.194735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194909] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.194977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.194993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.195008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.195024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.195041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.195057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.243 [2024-11-18 08:03:02.195072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.243 [2024-11-18 08:03:02.195089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated pattern elided: READ commands cid:22 through cid:63 (lba 19200 through 24448, len:128 each) on qid:1, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:29:09.244 [2024-11-18 08:03:02.196438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dd5a0 is same with the state(6) to be set
00:29:09.244 [2024-11-18 08:03:02.196610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15316a0 (9): Bad file descriptor
00:29:09.244 [2024-11-18 08:03:02.196637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:29:09.244 [2024-11-18 08:03:02.196652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:29:09.244 [2024-11-18 08:03:02.196669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:09.244 [2024-11-18 08:03:02.196685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:29:09.244 [2024-11-18 08:03:02.196703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:09.244 [2024-11-18 08:03:02.196716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:09.244 [2024-11-18 08:03:02.196733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:09.244 [2024-11-18 08:03:02.196748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:09.244 [2024-11-18 08:03:02.197975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:09.244 [2024-11-18 08:03:02.198018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:09.245 [2024-11-18 08:03:02.198035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:09.245 [2024-11-18 08:03:02.198050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:09.245 [2024-11-18 08:03:02.198063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:09.245 [2024-11-18 08:03:02.198206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.245 [2024-11-18 08:03:02.198235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1529030 with addr=10.0.0.2, port=4420
00:29:09.245 [2024-11-18 08:03:02.198253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529030 is same with the state(6) to be set
00:29:09.245 [2024-11-18 08:03:02.198574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1529030 (9): Bad file descriptor
00:29:09.245 [2024-11-18 08:03:02.198720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:29:09.245 [2024-11-18 08:03:02.198743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:29:09.245 [2024-11-18 08:03:02.198758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:09.245 [2024-11-18 08:03:02.198772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
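[Editor's note: the `connect() failed, errno = 111` entry above is the root of the reset failures that follow. On Linux, errno 111 is ECONNREFUSED: the peer (here, the NVMe-oF TCP target expected at 10.0.0.2:4420) answered the SYN with a RST because nothing was listening on that port. A minimal standalone sketch, not part of the SPDK test code, reproducing this errno from Python:]

```python
import errno
import socket

def try_connect(host: str, port: int, timeout: float = 1.0) -> int:
    """Attempt a TCP connect; return 0 on success or the errno on failure."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return 0
    except OSError as exc:
        return exc.errno if exc.errno is not None else -1
    finally:
        sock.close()

# On Linux, errno.ECONNREFUSED is 111 -- the same code the SPDK
# initiator logged when the target port had no listener.
print(errno.ECONNREFUSED)
```

[Connecting to any local port with no listener typically returns this same code, which is why the initiator keeps cycling through "resetting controller" / "Resetting controller failed" until the target comes back.]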
00:29:09.245 [2024-11-18 08:03:02.198825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated pattern elided: READ commands cid:0 through cid:61 (lba 16384 through 24192, len:128 each) on qid:1, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:29:09.246 [2024-11-18 08:03:02.200830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.246 [2024-11-18 08:03:02.200845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.246 [2024-11-18 08:03:02.200861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.246 [2024-11-18 08:03:02.200876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.200891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dcaf0 is same with the state(6) to be set 00:29:09.247 [2024-11-18 08:03:02.202148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:09.247 [2024-11-18 08:03:02.202452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202634] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.202973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.202988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 
08:03:02.203182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.247 [2024-11-18 08:03:02.203388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.247 [2024-11-18 08:03:02.203403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 
[2024-11-18 08:03:02.203729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.203969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.203984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.204000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.204015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.204032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.204046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.204062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.204077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.204094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.204108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.204124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.204139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.204156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.204171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.204187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.204202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.204217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cacc0 is same with the state(6) to be set 00:29:09.248 [2024-11-18 08:03:02.205459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:09.248 [2024-11-18 08:03:02.205516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.248 [2024-11-18 08:03:02.205868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.248 [2024-11-18 08:03:02.205883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.205900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.205915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.205931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.205946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.205966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.205982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.205999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 
08:03:02.206232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206408] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 
[2024-11-18 08:03:02.206785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.249 [2024-11-18 08:03:02.206941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.249 [2024-11-18 08:03:02.206958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.206972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.206989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.207003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.207020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.207035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.207050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8020 is same with the state(6) to be set 00:29:09.250 [2024-11-18 08:03:02.208213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:09.250 [2024-11-18 08:03:02.208497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208671] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.208976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.208991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 
08:03:02.209215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.250 [2024-11-18 08:03:02.209371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.250 [2024-11-18 08:03:02.209388] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 
[2024-11-18 08:03:02.209763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.209970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.209985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.251 [2024-11-18 08:03:02.210266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.251 [2024-11-18 08:03:02.210281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daa80 is same with the state(6) to be set 
00:29:09.251 [2024-11-18 08:03:02.211538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.251 [2024-11-18 08:03:02.211873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.251 [2024-11-18 08:03:02.211888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.211905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.211920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.211937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.211951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.211968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.211986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.212971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.212987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.213002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.213019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.213033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.213050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.213064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.213080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.213095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.213111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.213126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.213142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.252 [2024-11-18 08:03:02.213157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.252 [2024-11-18 08:03:02.213173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.213610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.213625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc010 is same with the state(6) to be set
00:29:09.253 [2024-11-18 08:03:02.214876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.214901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.214922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.214939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.214956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.214971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.214988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.253 [2024-11-18 08:03:02.215535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.253 [2024-11-18 08:03:02.215549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.215978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.215993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.216010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.216024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.216041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.254 [2024-11-18 08:03:02.216055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:09.254 [2024-11-18 08:03:02.216075] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 
[2024-11-18 08:03:02.216430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.254 [2024-11-18 08:03:02.216631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.254 [2024-11-18 08:03:02.216645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14deae0 is same with the state(6) to be set 00:29:09.254 [2024-11-18 08:03:02.218861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:09.254 [2024-11-18 08:03:02.218910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:09.254 [2024-11-18 08:03:02.218931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:09.254 [2024-11-18 08:03:02.218951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:09.254 [2024-11-18 08:03:02.219083] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:09.254 [2024-11-18 08:03:02.219112] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:29:09.254 [2024-11-18 08:03:02.219226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:09.254 task offset: 17408 on job bdev=Nvme5n1 fails 00:29:09.254 00:29:09.254 Latency(us) 00:29:09.254 [2024-11-18T07:03:02.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.254 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.254 Job: Nvme1n1 ended in about 0.91 seconds with error 00:29:09.254 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme1n1 : 0.91 139.96 8.75 69.98 0.00 301367.37 10145.94 327777.09 00:29:09.255 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme2n1 ended in about 0.93 seconds with error 00:29:09.255 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme2n1 : 0.93 138.13 8.63 69.06 0.00 299259.83 20000.62 321563.31 00:29:09.255 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme3n1 ended in about 0.93 seconds with error 00:29:09.255 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme3n1 : 0.93 137.64 8.60 68.82 0.00 294183.82 36505.98 307582.29 00:29:09.255 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme4n1 ended in about 0.93 seconds with error 00:29:09.255 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme4n1 : 0.93 158.67 9.92 52.53 0.00 280506.40 41748.86 302921.96 00:29:09.255 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme5n1 ended in about 0.91 seconds with error 00:29:09.255 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme5n1 : 0.91 140.61 8.79 70.30 0.00 275295.00 5485.61 301368.51 00:29:09.255 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme6n1 ended in about 0.94 seconds with error 00:29:09.255 Verification LBA 
range: start 0x0 length 0x400 00:29:09.255 Nvme6n1 : 0.94 136.75 8.55 68.37 0.00 277650.14 24758.04 323116.75 00:29:09.255 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme7n1 ended in about 0.94 seconds with error 00:29:09.255 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme7n1 : 0.94 136.26 8.52 68.13 0.00 272473.51 25437.68 315349.52 00:29:09.255 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme8n1 ended in about 0.92 seconds with error 00:29:09.255 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme8n1 : 0.92 138.74 8.67 69.37 0.00 260632.02 11165.39 299815.06 00:29:09.255 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme9n1 ended in about 0.94 seconds with error 00:29:09.255 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme9n1 : 0.94 77.47 4.84 58.37 0.00 389734.40 26408.58 349525.33 00:29:09.255 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.255 Job: Nvme10n1 ended in about 0.92 seconds with error 00:29:09.255 Verification LBA range: start 0x0 length 0x400 00:29:09.255 Nvme10n1 : 0.92 139.70 8.73 69.85 0.00 246531.41 14175.19 321563.31 00:29:09.255 [2024-11-18T07:03:02.343Z] =================================================================================================================== 00:29:09.255 [2024-11-18T07:03:02.343Z] Total : 1343.93 84.00 664.79 0.00 286300.51 5485.61 349525.33 00:29:09.255 [2024-11-18 08:03:02.247552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:09.255 [2024-11-18 08:03:02.247648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:09.255 [2024-11-18 08:03:02.247947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.247984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0x10d71c0 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.248006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d71c0 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.248096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.248124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d4ed0 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.248141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d4ed0 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.248233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.248260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1501610 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.248276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1501610 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.248354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.248381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe2610 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.248398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe2610 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.250034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:09.255 [2024-11-18 08:03:02.250076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:09.255 [2024-11-18 08:03:02.250096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:09.255 [2024-11-18 08:03:02.250115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:09.255 [2024-11-18 08:03:02.250261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.250290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1529870 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.250307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529870 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.250399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.250426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528e50 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.250442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528e50 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.250469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d71c0 (9): Bad file descriptor 00:29:09.255 [2024-11-18 08:03:02.250503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d4ed0 (9): Bad file descriptor 00:29:09.255 [2024-11-18 08:03:02.250525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1501610 (9): Bad file descriptor 00:29:09.255 [2024-11-18 08:03:02.250544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe2610 (9): Bad file descriptor 00:29:09.255 [2024-11-18 08:03:02.250598] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:29:09.255 [2024-11-18 08:03:02.250624] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:09.255 [2024-11-18 08:03:02.250644] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:09.255 [2024-11-18 08:03:02.250664] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:09.255 [2024-11-18 08:03:02.251118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.251149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d7620 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.251166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d7620 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.251248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.251274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1502990 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.251290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502990 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.251367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.255 [2024-11-18 08:03:02.251394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15316a0 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.251410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15316a0 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.251496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:29:09.255 [2024-11-18 08:03:02.251524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1529030 with addr=10.0.0.2, port=4420 00:29:09.255 [2024-11-18 08:03:02.251545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529030 is same with the state(6) to be set 00:29:09.255 [2024-11-18 08:03:02.251565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1529870 (9): Bad file descriptor 00:29:09.255 [2024-11-18 08:03:02.251585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528e50 (9): Bad file descriptor 00:29:09.255 [2024-11-18 08:03:02.251604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:09.255 [2024-11-18 08:03:02.251619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:09.255 [2024-11-18 08:03:02.251636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:09.255 [2024-11-18 08:03:02.251653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:09.255 [2024-11-18 08:03:02.251670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:09.255 [2024-11-18 08:03:02.251683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:09.255 [2024-11-18 08:03:02.251697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:09.255 [2024-11-18 08:03:02.251710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:09.255 [2024-11-18 08:03:02.251724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:09.255 [2024-11-18 08:03:02.251736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:09.255 [2024-11-18 08:03:02.251749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:09.255 [2024-11-18 08:03:02.251762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:09.255 [2024-11-18 08:03:02.251776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:09.255 [2024-11-18 08:03:02.251789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:09.255 [2024-11-18 08:03:02.251802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:09.255 [2024-11-18 08:03:02.251814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:09.255 [2024-11-18 08:03:02.251928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d7620 (9): Bad file descriptor 00:29:09.256 [2024-11-18 08:03:02.251955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502990 (9): Bad file descriptor 00:29:09.256 [2024-11-18 08:03:02.251974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15316a0 (9): Bad file descriptor 00:29:09.256 [2024-11-18 08:03:02.251993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1529030 (9): Bad file descriptor 00:29:09.256 [2024-11-18 08:03:02.252010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:09.256 [2024-11-18 08:03:02.252024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:09.256 [2024-11-18 08:03:02.252038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:09.256 [2024-11-18 08:03:02.252051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:09.256 [2024-11-18 08:03:02.252065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:09.256 [2024-11-18 08:03:02.252083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:09.256 [2024-11-18 08:03:02.252097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:09.256 [2024-11-18 08:03:02.252110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:29:09.256 [2024-11-18 08:03:02.252149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:09.256 [2024-11-18 08:03:02.252167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:09.256 [2024-11-18 08:03:02.252181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:09.256 [2024-11-18 08:03:02.252195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:09.256 [2024-11-18 08:03:02.252210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:09.256 [2024-11-18 08:03:02.252223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:09.256 [2024-11-18 08:03:02.252236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:09.256 [2024-11-18 08:03:02.252249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:09.256 [2024-11-18 08:03:02.252262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:09.256 [2024-11-18 08:03:02.252275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:09.256 [2024-11-18 08:03:02.252288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:09.256 [2024-11-18 08:03:02.252301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:29:09.256 [2024-11-18 08:03:02.252315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:09.256 [2024-11-18 08:03:02.252328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:09.256 [2024-11-18 08:03:02.252341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:09.256 [2024-11-18 08:03:02.252354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:09.822 08:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:10.760 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 818873 00:29:10.760 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:10.760 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 818873 00:29:10.760 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:10.760 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.760 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 818873 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:10.761 08:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.761 rmmod nvme_tcp 00:29:10.761 rmmod nvme_fabrics 00:29:10.761 rmmod nvme_keyring 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 818701 ']' 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 818701 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 818701 ']' 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 818701 00:29:10.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (818701) - No such process 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 818701 is not found' 00:29:10.761 Process with pid 818701 is not found 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:10.761 08:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.761 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.687 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:12.687 00:29:12.687 real 0m7.439s 00:29:12.687 user 0m18.470s 00:29:12.687 sys 0m1.429s 00:29:12.687 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:12.687 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.687 ************************************ 00:29:12.687 END TEST nvmf_shutdown_tc3 00:29:12.687 ************************************ 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:12.948 ************************************ 00:29:12.948 START TEST nvmf_shutdown_tc4 00:29:12.948 ************************************ 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.948 08:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:12.948 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.949 08:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:12.949 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:12.949 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.949 08:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:12.949 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:12.949 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.949 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.209 08:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:29:13.209 00:29:13.209 --- 10.0.0.2 ping statistics --- 00:29:13.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.209 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:29:13.209 00:29:13.209 --- 10.0.0.1 ping statistics --- 00:29:13.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.209 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.209 08:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=819894 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 819894 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 819894 ']' 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.209 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:13.209 [2024-11-18 08:03:06.138885] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:13.209 [2024-11-18 08:03:06.138960] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.209 [2024-11-18 08:03:06.213119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:13.209 [2024-11-18 08:03:06.260922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.209 [2024-11-18 08:03:06.260969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.209 [2024-11-18 08:03:06.260992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.209 [2024-11-18 08:03:06.261017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.209 [2024-11-18 08:03:06.261027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:13.209 [2024-11-18 08:03:06.262571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.209 [2024-11-18 08:03:06.262634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:13.209 [2024-11-18 08:03:06.262683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:13.209 [2024-11-18 08:03:06.262685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:13.469 [2024-11-18 08:03:06.408142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.469 08:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.469 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:13.469 Malloc1 00:29:13.469 [2024-11-18 08:03:06.515554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.469 Malloc2 00:29:13.729 Malloc3 00:29:13.729 Malloc4 00:29:13.729 Malloc5 00:29:13.729 Malloc6 00:29:13.729 Malloc7 00:29:13.988 Malloc8 00:29:13.988 Malloc9 
00:29:13.988 Malloc10 00:29:13.988 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.988 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:13.988 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.988 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:13.988 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=819958 00:29:13.988 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:13.988 08:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:13.988 [2024-11-18 08:03:07.052222] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:19.262 08:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:19.262 08:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 819894
00:29:19.262 08:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 819894 ']'
00:29:19.263 08:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 819894
00:29:19.263 08:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:29:19.263 08:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:19.263 08:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 819894
00:29:19.263 08:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:19.263 08:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:19.263 08:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 819894'
killing process with pid 819894
00:29:19.263 08:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 819894
00:29:19.263 08:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 819894
00:29:19.263 [2024-11-18 08:03:12.038234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a1930 is same with the state(6) to be set
00:29:19.263 [2024-11-18 08:03:12.039146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a1e00 is same with the state(6) to be set
00:29:19.263 [2024-11-18 08:03:12.040439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a22d0 is same with the state(6) to be set
00:29:19.263 [2024-11-18 08:03:12.047222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3fb0 is same with the state(6) to be set
00:29:19.263 [2024-11-18 08:03:12.048070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a4480 is same with the state(6) to be set
00:29:19.263 [2024-11-18 08:03:12.051819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e44be0 is same with the state(6) to be set
00:29:19.263 Write completed with error (sct=0, sc=8)
00:29:19.263 starting I/O failed: -6
00:29:19.263 [2024-11-18 08:03:12.055218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48400 is same with the state(6) to be set
00:29:19.264 [2024-11-18 08:03:12.055601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e488f0 is same with the state(6) to be set
00:29:19.264 [2024-11-18 08:03:12.055801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:19.264 [2024-11-18 08:03:12.056388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48dc0 is same with the state(6) to be set
00:29:19.264 [2024-11-18 08:03:12.057011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f30 is same with the state(6) to be set
00:29:19.264 [2024-11-18 08:03:12.057057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.264 [2024-11-18 08:03:12.057471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e45e30 is same with the state(6) to be set
00:29:19.265 [2024-11-18 08:03:12.058157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:19.265 [2024-11-18 08:03:12.059831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.265 NVMe io qpair process completion error
00:29:19.266 [2024-11-18 08:03:12.061960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6f40 is same with the state(6) to be set
00:29:19.266 [2024-11-18 08:03:12.062755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b78e0 is same with the state(6) to be set
00:29:19.266 [2024-11-18 08:03:12.063022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6a70 is same with the state(6) to be set
00:29:19.266 [2024-11-18 08:03:12.063520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.266 NVMe io qpair process completion error
00:29:19.266 [2024-11-18 08:03:12.064833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:19.266 Write completed with error (sct=0, sc=8)
00:29:19.266 starting I/O failed: -6
Write completed with error (sct=0, sc=8) 00:29:19.266 starting I/O failed: -6 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.266 starting I/O failed: -6 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.266 starting I/O failed: -6 00:29:19.266 [2024-11-18 08:03:12.065909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.266 starting I/O failed: -6 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.266 starting I/O failed: -6 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.266 starting I/O failed: -6 00:29:19.266 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 
starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 
Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 [2024-11-18 08:03:12.067104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 
00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, 
sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error 
(sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.267 Write completed with error (sct=0, sc=8) 00:29:19.267 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 [2024-11-18 08:03:12.068725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.268 NVMe io qpair process completion error 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 
00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 [2024-11-18 08:03:12.069994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write 
completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O 
failed: -6 00:29:19.268 [2024-11-18 08:03:12.070932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.268 starting I/O failed: -6 00:29:19.268 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 
00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 
00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 [2024-11-18 08:03:12.072129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, 
sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error 
(sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.269 Write completed with error (sct=0, sc=8) 00:29:19.269 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with 
error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 [2024-11-18 08:03:12.074146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.270 NVMe io qpair process completion error 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write 
completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 [2024-11-18 08:03:12.075505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O 
failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, 
sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 [2024-11-18 08:03:12.076597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 
00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.270 starting I/O failed: -6 00:29:19.270 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 
00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 [2024-11-18 08:03:12.077698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 
starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 
00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, 
sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 [2024-11-18 08:03:12.081627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.271 NVMe io qpair process completion error 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write 
completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 Write completed with error (sct=0, sc=8) 00:29:19.271 starting I/O failed: -6 00:29:19.272 [2024-11-18 08:03:12.082985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 
00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 [2024-11-18 08:03:12.083977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.272 Write completed with error (sct=0, sc=8) 
00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 
00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with 
error (sct=0, sc=8) 00:29:19.272 [2024-11-18 08:03:12.085151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.272 Write completed with error (sct=0, sc=8) 00:29:19.272 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, 
sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error 
(sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 [2024-11-18 08:03:12.087569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.273 NVMe io qpair process completion error 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 starting I/O failed: -6 00:29:19.273 Write completed with error (sct=0, sc=8) 00:29:19.273 Write 
completed with error (sct=0, sc=8)
00:29:19.273 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.273 [2024-11-18 08:03:12.088895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:19.274 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.274 [2024-11-18 08:03:12.090007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.274 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.274 [2024-11-18 08:03:12.091121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.275 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.275 [2024-11-18 08:03:12.093393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:19.275 NVMe io qpair process completion error
00:29:19.276 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.276 [2024-11-18 08:03:12.094660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:19.276 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.276 [2024-11-18 08:03:12.095739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.277 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.277 [2024-11-18 08:03:12.096894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.277 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.277 [2024-11-18 08:03:12.098923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:19.277 NVMe io qpair process completion error
00:29:19.278 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.278 [2024-11-18 08:03:12.100904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:19.278 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
00:29:19.278 [2024-11-18 08:03:12.102138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.279 [condensed: repeated "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6"]
I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 
starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 [2024-11-18 08:03:12.104166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.279 NVMe io qpair process completion error 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed 
with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 
00:29:19.279 [2024-11-18 08:03:12.105456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 Write completed with error (sct=0, sc=8) 00:29:19.279 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 
00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 [2024-11-18 08:03:12.106514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 
starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 
Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 [2024-11-18 08:03:12.107668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O 
failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.280 starting I/O failed: -6 00:29:19.280 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting 
I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 
starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 starting I/O failed: -6 00:29:19.281 [2024-11-18 08:03:12.111547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.281 NVMe io qpair process completion error 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, 
sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, 
sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, 
sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.281 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Write completed with error (sct=0, sc=8) 00:29:19.282 Initializing NVMe Controllers 00:29:19.282 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:19.282 Controller IO queue size 128, less than required. 00:29:19.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:19.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:19.282 Initialization complete. Launching workers. 
00:29:19.282 ========================================================
00:29:19.282                                                                            Latency(us)
00:29:19.282 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1875.33      80.58   68276.17     823.58  119829.77
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1702.88      73.17   75227.88     873.74  131925.48
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1708.29      73.40   75022.66     892.34  134666.27
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1739.27      74.73   73713.05    1245.08  137918.73
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1799.50      77.32   71274.70    1078.45  125426.31
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1822.04      78.29   69585.52    1162.87  127278.52
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1719.56      73.89   74495.32    1246.04  140315.49
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1770.04      76.06   71654.31     749.14  127780.01
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1710.89      73.51   74154.31     952.30  128154.02
00:29:19.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1716.53      73.76   73942.74     915.46  127997.71
00:29:19.282 ========================================================
00:29:19.282 Total                                                                    :   17564.33     754.72   72663.35     749.14  140315.49
00:29:19.282
00:29:19.282 [2024-11-18 08:03:12.120307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68370 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a689d0 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6a470 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a686a0 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6a7a0 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67fb0 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6a140 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6db30 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69e10 is same with the state(6) to be set
00:29:19.282 [2024-11-18 08:03:12.120935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68190 is same with the state(6) to be set
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:19.542 08:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 819958 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 819958 00:29:20.483 08:03:13
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 819958 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:20.483 08:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.483 rmmod nvme_tcp 00:29:20.483 rmmod nvme_fabrics 00:29:20.483 rmmod nvme_keyring 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 819894 ']' 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 819894 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 819894 ']' 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 819894 00:29:20.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (819894) - No such process 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 819894 is not found' 
00:29:20.483 Process with pid 819894 is not found 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.483 08:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.019 00:29:23.019 real 0m9.785s 00:29:23.019 user 0m23.659s 00:29:23.019 sys 0m5.606s 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:23.019 ************************************ 00:29:23.019 END TEST nvmf_shutdown_tc4 00:29:23.019 ************************************ 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:23.019 00:29:23.019 real 0m36.762s 00:29:23.019 user 1m38.521s 00:29:23.019 sys 0m11.935s 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:23.019 ************************************ 00:29:23.019 END TEST nvmf_shutdown 00:29:23.019 ************************************ 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:23.019 ************************************ 00:29:23.019 START TEST nvmf_nsid 00:29:23.019 ************************************ 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:23.019 * Looking for test storage... 
00:29:23.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.019 
08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.019 --rc genhtml_branch_coverage=1 00:29:23.019 --rc genhtml_function_coverage=1 00:29:23.019 --rc genhtml_legend=1 00:29:23.019 --rc geninfo_all_blocks=1 00:29:23.019 --rc 
geninfo_unexecuted_blocks=1 00:29:23.019 00:29:23.019 ' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.019 --rc genhtml_branch_coverage=1 00:29:23.019 --rc genhtml_function_coverage=1 00:29:23.019 --rc genhtml_legend=1 00:29:23.019 --rc geninfo_all_blocks=1 00:29:23.019 --rc geninfo_unexecuted_blocks=1 00:29:23.019 00:29:23.019 ' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.019 --rc genhtml_branch_coverage=1 00:29:23.019 --rc genhtml_function_coverage=1 00:29:23.019 --rc genhtml_legend=1 00:29:23.019 --rc geninfo_all_blocks=1 00:29:23.019 --rc geninfo_unexecuted_blocks=1 00:29:23.019 00:29:23.019 ' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.019 --rc genhtml_branch_coverage=1 00:29:23.019 --rc genhtml_function_coverage=1 00:29:23.019 --rc genhtml_legend=1 00:29:23.019 --rc geninfo_all_blocks=1 00:29:23.019 --rc geninfo_unexecuted_blocks=1 00:29:23.019 00:29:23.019 ' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.019 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.019 08:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:23.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.020 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:24.927 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.927 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:24.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:24.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:24.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.928 08:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.928 08:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.187 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:25.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:29:25.187 00:29:25.187 --- 10.0.0.2 ping statistics --- 00:29:25.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.187 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:25.187 00:29:25.187 --- 10.0.0.1 ping statistics --- 00:29:25.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.187 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.187 08:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=823199 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 823199 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 823199 ']' 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.187 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:25.187 [2024-11-18 08:03:18.230484] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:25.187 [2024-11-18 08:03:18.230568] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.446 [2024-11-18 08:03:18.302636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.446 [2024-11-18 08:03:18.349448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.446 [2024-11-18 08:03:18.349526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.446 [2024-11-18 08:03:18.349542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.446 [2024-11-18 08:03:18.349569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.446 [2024-11-18 08:03:18.349580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
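The nvmf_tcp_init steps traced above move one port of the NIC (cvl_0_0) into a private network namespace, address both ends from 10.0.0.0/24, bring the links up, and open TCP port 4420 for NVMe/TCP. A minimal sketch of those commands follows; the interface names, namespace name, and addresses are taken from the log, while the `run` dry-run guard is an addition here so the sketch can be previewed without root (the real nvmf/common.sh helper differs in detail):

```shell
# Sketch of the namespace topology set up in the trace above: the
# target-side interface lives inside cvl_0_0_ns_spdk, the initiator
# side stays in the default namespace. Pass "" as $1 to execute for
# real (requires root); the default "echo" only prints the plan.
setup_netns_topology() {
  local run="${1:-echo}"            # "echo" = dry run, "" = execute
  local ns=cvl_0_0_ns_spdk
  local tgt_if=cvl_0_0              # target side, moved into the netns
  local ini_if=cvl_0_1              # initiator side, default netns

  $run ip -4 addr flush "$tgt_if"
  $run ip -4 addr flush "$ini_if"
  $run ip netns add "$ns"
  $run ip link set "$tgt_if" netns "$ns"
  $run ip addr add 10.0.0.1/24 dev "$ini_if"
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  $run ip link set "$ini_if" up
  $run ip netns exec "$ns" ip link set "$tgt_if" up
  $run ip netns exec "$ns" ip link set lo up
  # Accept NVMe/TCP traffic arriving on the initiator-side interface.
  $run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
```

After this, the two `ping -c 1` checks in the log confirm that each side can reach the other across the namespace boundary before the target is started.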
00:29:25.446 [2024-11-18 08:03:18.350155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=823244 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.446 
08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=256ee17d-80dc-4d36-bfce-51e204032170 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=2570d998-3610-418e-84ca-a4ec71fc1dff 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4f7016a1-9288-4cf4-ad51-ef776612e0e4 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.446 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:25.446 null0 00:29:25.446 null1 00:29:25.446 null2 00:29:25.446 [2024-11-18 08:03:18.523749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.446 [2024-11-18 08:03:18.532142] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:25.446 [2024-11-18 08:03:18.532217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823244 ] 00:29:25.705 [2024-11-18 08:03:18.547991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.705 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.705 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 823244 /var/tmp/tgt2.sock 00:29:25.705 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 823244 ']' 00:29:25.705 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:25.705 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.705 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:25.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
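The `waitforlisten` calls above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...") poll until the freshly launched target creates its RPC socket, bounded by `max_retries=100`. A simplified stand-in is sketched below; the real helper additionally verifies the RPC server answers, whereas this version (a hypothetical `wait_for_path`, not the actual common.sh function) only waits for the filesystem entry to appear:

```shell
# Poll until a path (e.g. an RPC socket like /var/tmp/tgt2.sock)
# exists, sleeping briefly between attempts, up to max_retries tries.
# Returns 0 once the path appears, 1 on timeout.
wait_for_path() {
  local path="$1" max_retries="${2:-100}" i=0
  while [ "$i" -lt "$max_retries" ]; do
    [ -e "$path" ] && return 0
    i=$((i + 1))
    sleep 0.1
  done
  echo "timed out waiting for $path" >&2
  return 1
}
```

Bounding the wait matters in CI: a target that fails to start otherwise hangs the whole job instead of failing fast with a useful message.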
00:29:25.705 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.705 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:25.705 [2024-11-18 08:03:18.600708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.705 [2024-11-18 08:03:18.646107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.963 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.963 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:25.963 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:26.549 [2024-11-18 08:03:19.351461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.549 [2024-11-18 08:03:19.367684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:26.549 nvme0n1 nvme0n2 00:29:26.549 nvme1n1 00:29:26.549 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:26.549 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:26.549 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:27.114 08:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:28.050 08:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:28.050 08:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:28.050 08:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:28.050 08:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 256ee17d-80dc-4d36-bfce-51e204032170 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:28.050 08:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=256ee17d80dc4d36bfce51e204032170 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 256EE17D80DC4D36BFCE51E204032170 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 256EE17D80DC4D36BFCE51E204032170 == \2\5\6\E\E\1\7\D\8\0\D\C\4\D\3\6\B\F\C\E\5\1\E\2\0\4\0\3\2\1\7\0 ]] 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 2570d998-3610-418e-84ca-a4ec71fc1dff 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:28.050 
08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2570d9983610418e84caa4ec71fc1dff 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2570D9983610418E84CAA4EC71FC1DFF 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 2570D9983610418E84CAA4EC71FC1DFF == \2\5\7\0\D\9\9\8\3\6\1\0\4\1\8\E\8\4\C\A\A\4\E\C\7\1\F\C\1\D\F\F ]] 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4f7016a1-9288-4cf4-ad51-ef776612e0e4 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:29:28.050 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4f7016a192884cf4ad51ef776612e0e4 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4F7016A192884CF4AD51EF776612E0E4 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4F7016A192884CF4AD51EF776612E0E4 == \4\F\7\0\1\6\A\1\9\2\8\8\4\C\F\4\A\D\5\1\E\F\7\7\6\6\1\2\E\0\E\4 ]] 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 823244 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 823244 ']' 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 823244 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823244 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823244' 00:29:28.309 killing process with pid 823244 00:29:28.309 08:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 823244 00:29:28.309 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 823244 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.878 rmmod nvme_tcp 00:29:28.878 rmmod nvme_fabrics 00:29:28.878 rmmod nvme_keyring 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 823199 ']' 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 823199 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 823199 ']' 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 823199 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.878 08:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823199 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823199' 00:29:28.878 killing process with pid 823199 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 823199 00:29:28.878 08:03:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 823199 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.139 08:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.139 08:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.047 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:31.047 00:29:31.047 real 0m8.421s 00:29:31.047 user 0m8.397s 00:29:31.047 sys 0m2.561s 00:29:31.047 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.047 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:31.047 ************************************ 00:29:31.047 END TEST nvmf_nsid 00:29:31.047 ************************************ 00:29:31.047 08:03:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:31.047 00:29:31.047 real 18m7.100s 00:29:31.047 user 50m29.012s 00:29:31.047 sys 3m56.778s 00:29:31.047 08:03:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.047 08:03:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:31.047 ************************************ 00:29:31.047 END TEST nvmf_target_extra 00:29:31.047 ************************************ 00:29:31.306 08:03:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:31.306 08:03:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:31.306 08:03:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.306 08:03:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.306 ************************************ 00:29:31.306 START TEST nvmf_host 00:29:31.306 ************************************ 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:31.307 * Looking for test storage... 
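Earlier in the run, the nsid checks compared each `nvme id-ns ... | jq -r .nguid` result against the namespace UUID passed through `uuid2nguid` (visible in the trace as `tr -d -`), e.g. `256EE17D80DC4D36BFCE51E204032170 == \2\5\6\E...`. The conversion is just the RFC 4122 UUID with dashes dropped and hex digits normalized to upper case; a minimal stand-alone version (the exact definition in nvmf/common.sh is not shown in this log, so treat this as an equivalent sketch):

```shell
# Turn a UUID string into the 32-hex-digit NGUID form used in the
# comparisons above: strip the dashes, uppercase the hex digits.
uuid2nguid() {
  printf '%s' "$1" | tr -d '-' | tr 'a-f' 'A-F'
}
```

This is why the test can generate namespace identities with `uuidgen` up front and later verify, through the kernel host stack, that each block device carries the NGUID it was created with.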
00:29:31.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:31.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.307 --rc genhtml_branch_coverage=1 00:29:31.307 --rc genhtml_function_coverage=1 00:29:31.307 --rc genhtml_legend=1 00:29:31.307 --rc geninfo_all_blocks=1 00:29:31.307 --rc geninfo_unexecuted_blocks=1 00:29:31.307 00:29:31.307 ' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:31.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.307 --rc genhtml_branch_coverage=1 00:29:31.307 --rc genhtml_function_coverage=1 00:29:31.307 --rc genhtml_legend=1 00:29:31.307 --rc 
geninfo_all_blocks=1 00:29:31.307 --rc geninfo_unexecuted_blocks=1 00:29:31.307 00:29:31.307 ' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:31.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.307 --rc genhtml_branch_coverage=1 00:29:31.307 --rc genhtml_function_coverage=1 00:29:31.307 --rc genhtml_legend=1 00:29:31.307 --rc geninfo_all_blocks=1 00:29:31.307 --rc geninfo_unexecuted_blocks=1 00:29:31.307 00:29:31.307 ' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:31.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.307 --rc genhtml_branch_coverage=1 00:29:31.307 --rc genhtml_function_coverage=1 00:29:31.307 --rc genhtml_legend=1 00:29:31.307 --rc geninfo_all_blocks=1 00:29:31.307 --rc geninfo_unexecuted_blocks=1 00:29:31.307 00:29:31.307 ' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:31.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.307 ************************************ 00:29:31.307 START TEST nvmf_multicontroller 00:29:31.307 ************************************ 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:31.307 * Looking for test storage... 
00:29:31.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:31.307 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.568 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:31.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.569 --rc genhtml_branch_coverage=1 00:29:31.569 --rc genhtml_function_coverage=1 
00:29:31.569 --rc genhtml_legend=1 00:29:31.569 --rc geninfo_all_blocks=1 00:29:31.569 --rc geninfo_unexecuted_blocks=1 00:29:31.569 00:29:31.569 ' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:31.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.569 --rc genhtml_branch_coverage=1 00:29:31.569 --rc genhtml_function_coverage=1 00:29:31.569 --rc genhtml_legend=1 00:29:31.569 --rc geninfo_all_blocks=1 00:29:31.569 --rc geninfo_unexecuted_blocks=1 00:29:31.569 00:29:31.569 ' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:31.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.569 --rc genhtml_branch_coverage=1 00:29:31.569 --rc genhtml_function_coverage=1 00:29:31.569 --rc genhtml_legend=1 00:29:31.569 --rc geninfo_all_blocks=1 00:29:31.569 --rc geninfo_unexecuted_blocks=1 00:29:31.569 00:29:31.569 ' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:31.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.569 --rc genhtml_branch_coverage=1 00:29:31.569 --rc genhtml_function_coverage=1 00:29:31.569 --rc genhtml_legend=1 00:29:31.569 --rc geninfo_all_blocks=1 00:29:31.569 --rc geninfo_unexecuted_blocks=1 00:29:31.569 00:29:31.569 ' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.569 08:03:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:31.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:31.569 08:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.153 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:34.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:34.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.154 08:03:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:34.154 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:34.154 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:29:34.154 00:29:34.154 --- 10.0.0.2 ping statistics --- 00:29:34.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.154 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:29:34.154 00:29:34.154 --- 10.0.0.1 ping statistics --- 00:29:34.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.154 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=825781 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 825781 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 825781 ']' 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.154 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.155 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.155 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.155 08:03:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 [2024-11-18 08:03:26.837307] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:34.155 [2024-11-18 08:03:26.837407] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.155 [2024-11-18 08:03:26.909954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:34.155 [2024-11-18 08:03:26.953285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.155 [2024-11-18 08:03:26.953344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:34.155 [2024-11-18 08:03:26.953366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.155 [2024-11-18 08:03:26.953377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.155 [2024-11-18 08:03:26.953387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.155 [2024-11-18 08:03:26.954823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.155 [2024-11-18 08:03:26.954889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.155 [2024-11-18 08:03:26.954892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 [2024-11-18 08:03:27.094761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 Malloc0 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 [2024-11-18 
08:03:27.153835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 [2024-11-18 08:03:27.161700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 Malloc1 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=825804 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 825804 /var/tmp/bdevperf.sock 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 825804 ']' 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:34.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.155 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.722 NVMe0n1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.722 1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:34.722 08:03:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.722 request: 00:29:34.722 { 00:29:34.722 "name": "NVMe0", 00:29:34.722 "trtype": "tcp", 00:29:34.722 "traddr": "10.0.0.2", 00:29:34.722 "adrfam": "ipv4", 00:29:34.722 "trsvcid": "4420", 00:29:34.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.722 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:34.722 "hostaddr": "10.0.0.1", 00:29:34.722 "prchk_reftag": false, 00:29:34.722 "prchk_guard": false, 00:29:34.722 "hdgst": false, 00:29:34.722 "ddgst": false, 00:29:34.722 "allow_unrecognized_csi": false, 00:29:34.722 "method": "bdev_nvme_attach_controller", 00:29:34.722 "req_id": 1 00:29:34.722 } 00:29:34.722 Got JSON-RPC error response 00:29:34.722 response: 00:29:34.722 { 00:29:34.722 "code": -114, 00:29:34.722 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:34.722 } 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:34.722 08:03:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.722 request: 00:29:34.722 { 00:29:34.722 "name": "NVMe0", 00:29:34.722 "trtype": "tcp", 00:29:34.722 "traddr": "10.0.0.2", 00:29:34.722 "adrfam": "ipv4", 00:29:34.722 "trsvcid": "4420", 00:29:34.722 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:34.722 "hostaddr": "10.0.0.1", 00:29:34.722 "prchk_reftag": false, 00:29:34.722 "prchk_guard": false, 00:29:34.722 "hdgst": false, 00:29:34.722 "ddgst": false, 00:29:34.722 "allow_unrecognized_csi": false, 00:29:34.722 "method": "bdev_nvme_attach_controller", 00:29:34.722 "req_id": 1 00:29:34.722 } 00:29:34.722 Got JSON-RPC error response 00:29:34.722 response: 00:29:34.722 { 00:29:34.722 "code": -114, 00:29:34.722 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:34.722 } 00:29:34.722 08:03:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.722 request: 00:29:34.722 { 00:29:34.722 "name": "NVMe0", 00:29:34.722 "trtype": "tcp", 00:29:34.722 "traddr": "10.0.0.2", 00:29:34.722 "adrfam": "ipv4", 00:29:34.722 "trsvcid": "4420", 00:29:34.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.722 "hostaddr": "10.0.0.1", 00:29:34.722 "prchk_reftag": false, 00:29:34.722 "prchk_guard": false, 00:29:34.722 "hdgst": false, 00:29:34.722 "ddgst": false, 00:29:34.722 "multipath": "disable", 00:29:34.722 "allow_unrecognized_csi": false, 00:29:34.722 "method": "bdev_nvme_attach_controller", 00:29:34.722 "req_id": 1 00:29:34.722 } 00:29:34.722 Got JSON-RPC error response 00:29:34.722 response: 00:29:34.722 { 00:29:34.722 "code": -114, 00:29:34.722 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:34.722 } 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.722 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.722 request: 00:29:34.722 { 00:29:34.723 "name": "NVMe0", 00:29:34.723 "trtype": "tcp", 00:29:34.723 "traddr": "10.0.0.2", 00:29:34.723 "adrfam": "ipv4", 00:29:34.723 "trsvcid": "4420", 00:29:34.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.723 "hostaddr": "10.0.0.1", 00:29:34.723 "prchk_reftag": false, 00:29:34.723 "prchk_guard": false, 00:29:34.723 "hdgst": false, 00:29:34.723 "ddgst": false, 00:29:34.723 "multipath": "failover", 00:29:34.723 "allow_unrecognized_csi": false, 00:29:34.723 "method": "bdev_nvme_attach_controller", 00:29:34.723 "req_id": 1 00:29:34.723 } 00:29:34.723 Got JSON-RPC error response 00:29:34.723 response: 00:29:34.723 { 00:29:34.723 "code": -114, 00:29:34.723 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:34.723 } 00:29:34.723 08:03:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.723 NVMe0n1 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.723 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.981 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:34.981 08:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:36.362 { 00:29:36.362 "results": [ 00:29:36.362 { 00:29:36.362 "job": "NVMe0n1", 00:29:36.362 "core_mask": "0x1", 00:29:36.362 "workload": "write", 00:29:36.362 "status": "finished", 00:29:36.362 "queue_depth": 128, 00:29:36.362 "io_size": 4096, 00:29:36.362 "runtime": 1.005835, 00:29:36.362 "iops": 17678.843945577555, 00:29:36.362 "mibps": 69.05798416241232, 00:29:36.362 "io_failed": 0, 00:29:36.362 "io_timeout": 0, 00:29:36.362 "avg_latency_us": 7228.35541492229, 00:29:36.362 "min_latency_us": 4514.702222222222, 00:29:36.362 "max_latency_us": 13010.10962962963 00:29:36.362 } 00:29:36.362 ], 00:29:36.362 "core_count": 1 00:29:36.362 } 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 825804 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 825804 ']' 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 825804 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825804 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825804' 00:29:36.362 killing process with pid 825804 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 825804 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 825804 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:36.362 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:36.362 [2024-11-18 08:03:27.269588] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:36.362 [2024-11-18 08:03:27.269680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825804 ] 00:29:36.362 [2024-11-18 08:03:27.342393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.362 [2024-11-18 08:03:27.389159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.362 [2024-11-18 08:03:27.883048] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 678134c4-a646-46cd-8917-816e08640177 already exists 00:29:36.362 [2024-11-18 08:03:27.883087] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:678134c4-a646-46cd-8917-816e08640177 alias for bdev NVMe1n1 00:29:36.362 [2024-11-18 08:03:27.883113] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:36.362 Running I/O for 1 seconds... 00:29:36.362 17654.00 IOPS, 68.96 MiB/s 00:29:36.362 Latency(us) 00:29:36.362 [2024-11-18T07:03:29.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.362 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:36.362 NVMe0n1 : 1.01 17678.84 69.06 0.00 0.00 7228.36 4514.70 13010.11 00:29:36.362 [2024-11-18T07:03:29.450Z] =================================================================================================================== 00:29:36.362 [2024-11-18T07:03:29.450Z] Total : 17678.84 69.06 0.00 0.00 7228.36 4514.70 13010.11 00:29:36.362 Received shutdown signal, test time was about 1.000000 seconds 00:29:36.362 00:29:36.362 Latency(us) 00:29:36.362 [2024-11-18T07:03:29.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.362 [2024-11-18T07:03:29.450Z] =================================================================================================================== 00:29:36.362 [2024-11-18T07:03:29.450Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:36.362 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:36.362 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.363 rmmod nvme_tcp 00:29:36.363 rmmod nvme_fabrics 00:29:36.363 rmmod nvme_keyring 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 825781 ']' 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 825781 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 825781 ']' 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 825781 
00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825781 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825781' 00:29:36.363 killing process with pid 825781 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 825781 00:29:36.363 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 825781 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.621 08:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.157 00:29:39.157 real 0m7.318s 00:29:39.157 user 0m10.915s 00:29:39.157 sys 0m2.418s 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:39.157 ************************************ 00:29:39.157 END TEST nvmf_multicontroller 00:29:39.157 ************************************ 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.157 ************************************ 00:29:39.157 START TEST nvmf_aer 00:29:39.157 ************************************ 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:39.157 * Looking for test storage... 
00:29:39.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:39.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.157 --rc genhtml_branch_coverage=1 00:29:39.157 --rc genhtml_function_coverage=1 00:29:39.157 --rc genhtml_legend=1 00:29:39.157 --rc geninfo_all_blocks=1 00:29:39.157 --rc geninfo_unexecuted_blocks=1 00:29:39.157 00:29:39.157 ' 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:39.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.157 --rc 
genhtml_branch_coverage=1 00:29:39.157 --rc genhtml_function_coverage=1 00:29:39.157 --rc genhtml_legend=1 00:29:39.157 --rc geninfo_all_blocks=1 00:29:39.157 --rc geninfo_unexecuted_blocks=1 00:29:39.157 00:29:39.157 ' 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:39.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.157 --rc genhtml_branch_coverage=1 00:29:39.157 --rc genhtml_function_coverage=1 00:29:39.157 --rc genhtml_legend=1 00:29:39.157 --rc geninfo_all_blocks=1 00:29:39.157 --rc geninfo_unexecuted_blocks=1 00:29:39.157 00:29:39.157 ' 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:39.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.157 --rc genhtml_branch_coverage=1 00:29:39.157 --rc genhtml_function_coverage=1 00:29:39.157 --rc genhtml_legend=1 00:29:39.157 --rc geninfo_all_blocks=1 00:29:39.157 --rc geninfo_unexecuted_blocks=1 00:29:39.157 00:29:39.157 ' 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.157 08:03:31 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.157 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.158 08:03:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.062 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.062 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.062 08:03:33 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.062 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.062 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.062 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.063 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.063 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.063 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.063 08:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.063 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.063 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.063 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.063 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.063 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.063 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.063 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.063 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:41.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:29:41.323 00:29:41.323 --- 10.0.0.2 ping statistics --- 00:29:41.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.323 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:41.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:29:41.323 00:29:41.323 --- 10.0.0.1 ping statistics --- 00:29:41.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.323 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=828022 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 828022 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 828022 ']' 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.323 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.323 [2024-11-18 08:03:34.234639] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:41.323 [2024-11-18 08:03:34.234718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.323 [2024-11-18 08:03:34.307326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.323 [2024-11-18 08:03:34.356913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:41.323 [2024-11-18 08:03:34.356972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.323 [2024-11-18 08:03:34.356986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.323 [2024-11-18 08:03:34.356997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.323 [2024-11-18 08:03:34.357006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.323 [2024-11-18 08:03:34.358595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.323 [2024-11-18 08:03:34.358658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.323 [2024-11-18 08:03:34.358720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.323 [2024-11-18 08:03:34.358723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.584 [2024-11-18 08:03:34.508652] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.584 Malloc0 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.584 [2024-11-18 08:03:34.573120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
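At this point in the trace the target has a TCP transport, a Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420, all created through rpc_cmd. As a rough sketch of the same sequence driven by hand (the rpc.py path is an assumption; the arguments are taken from the trace above), the helper below only prints the RPC invocations, since actually executing them requires a running nvmf_tgt:

```shell
# Hypothetical helper that prints the RPC calls seen in the trace above.
# RPC_PY is an assumed location of SPDK's rpc.py; a live nvmf_tgt listening
# on the default socket would be needed to actually run these.
RPC_PY="./scripts/rpc.py"

aer_rpc_cmds() {
    printf '%s %s\n' "$RPC_PY" "nvmf_create_transport -t tcp -o -u 8192"
    printf '%s %s\n' "$RPC_PY" "bdev_malloc_create 64 512 --name Malloc0"
    printf '%s %s\n' "$RPC_PY" "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2"
    printf '%s %s\n' "$RPC_PY" "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
    printf '%s %s\n' "$RPC_PY" "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
}

aer_rpc_cmds
```

Note that `-m 2` caps the subsystem at two namespaces, which is why the nvmf_get_subsystems output below reports "max_namespaces": 2 and the later Malloc1 attach is the event the AER test listens for.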
00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.584 [ 00:29:41.584 { 00:29:41.584 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:41.584 "subtype": "Discovery", 00:29:41.584 "listen_addresses": [], 00:29:41.584 "allow_any_host": true, 00:29:41.584 "hosts": [] 00:29:41.584 }, 00:29:41.584 { 00:29:41.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.584 "subtype": "NVMe", 00:29:41.584 "listen_addresses": [ 00:29:41.584 { 00:29:41.584 "trtype": "TCP", 00:29:41.584 "adrfam": "IPv4", 00:29:41.584 "traddr": "10.0.0.2", 00:29:41.584 "trsvcid": "4420" 00:29:41.584 } 00:29:41.584 ], 00:29:41.584 "allow_any_host": true, 00:29:41.584 "hosts": [], 00:29:41.584 "serial_number": "SPDK00000000000001", 00:29:41.584 "model_number": "SPDK bdev Controller", 00:29:41.584 "max_namespaces": 2, 00:29:41.584 "min_cntlid": 1, 00:29:41.584 "max_cntlid": 65519, 00:29:41.584 "namespaces": [ 00:29:41.584 { 00:29:41.584 "nsid": 1, 00:29:41.584 "bdev_name": "Malloc0", 00:29:41.584 "name": "Malloc0", 00:29:41.584 "nguid": "80A4EB4503244DB38F62A4A49363C2E1", 00:29:41.584 "uuid": "80a4eb45-0324-4db3-8f62-a4a49363c2e1" 00:29:41.584 } 00:29:41.584 ] 00:29:41.584 } 00:29:41.584 ] 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=828137 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:41.584 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:29:41.845 08:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.104 Malloc1 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.104 [ 00:29:42.104 { 00:29:42.104 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:42.104 "subtype": "Discovery", 00:29:42.104 
"listen_addresses": [], 00:29:42.104 "allow_any_host": true, 00:29:42.104 "hosts": [] 00:29:42.104 }, 00:29:42.104 { 00:29:42.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.104 "subtype": "NVMe", 00:29:42.104 "listen_addresses": [ 00:29:42.104 { 00:29:42.104 "trtype": "TCP", 00:29:42.104 "adrfam": "IPv4", 00:29:42.104 "traddr": "10.0.0.2", 00:29:42.104 "trsvcid": "4420" 00:29:42.104 } 00:29:42.104 ], 00:29:42.104 "allow_any_host": true, 00:29:42.104 "hosts": [], 00:29:42.104 "serial_number": "SPDK00000000000001", 00:29:42.104 "model_number": "SPDK bdev Controller", 00:29:42.104 "max_namespaces": 2, 00:29:42.104 "min_cntlid": 1, 00:29:42.104 "max_cntlid": 65519, 00:29:42.104 "namespaces": [ 00:29:42.104 { 00:29:42.104 "nsid": 1, 00:29:42.104 "bdev_name": "Malloc0", 00:29:42.104 "name": "Malloc0", 00:29:42.104 "nguid": "80A4EB4503244DB38F62A4A49363C2E1", 00:29:42.104 "uuid": "80a4eb45-0324-4db3-8f62-a4a49363c2e1" 00:29:42.104 }, 00:29:42.104 { 00:29:42.104 "nsid": 2, 00:29:42.104 "bdev_name": "Malloc1", 00:29:42.104 "name": "Malloc1", 00:29:42.104 "nguid": "557CD05439DB4FC0B3B3611B4DF0CDBE", 00:29:42.104 "uuid": "557cd054-39db-4fc0-b3b3-611b4df0cdbe" 00:29:42.104 } 00:29:42.104 ] 00:29:42.104 } 00:29:42.104 ] 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 828137 00:29:42.104 Asynchronous Event Request test 00:29:42.104 Attaching to 10.0.0.2 00:29:42.104 Attached to 10.0.0.2 00:29:42.104 Registering asynchronous event callbacks... 00:29:42.104 Starting namespace attribute notice tests for all controllers... 00:29:42.104 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:42.104 aer_cb - Changed Namespace 00:29:42.104 Cleaning up... 
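The aer binary signals readiness by creating /tmp/aer_touch_file, and the harness polls for it with waitforfile (the repeated '[ i -lt 200 ]' / 'sleep 0.1' steps visible above). A minimal re-implementation of that polling helper, assuming the same budget of 200 iterations of 0.1 s:

```shell
# Sketch of the waitforfile polling loop from the trace above:
# wait for a file to appear, checking every 0.1 s, up to 200 tries (~20 s).
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        if [ "$i" -ge 200 ]; then
            return 1    # timed out, file never appeared
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}
```

For example, `waitforfile /tmp/aer_touch_file` returns 0 as soon as the aer tool touches the file, which in the run above took four iterations.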
00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:42.104 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.105 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:42.105 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.105 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.105 rmmod nvme_tcp 
00:29:42.105 rmmod nvme_fabrics 00:29:42.105 rmmod nvme_keyring 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 828022 ']' 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 828022 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 828022 ']' 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 828022 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 828022 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 828022' 00:29:42.363 killing process with pid 828022 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 828022 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 828022 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@297 -- # iptr 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.363 08:03:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.898 00:29:44.898 real 0m5.743s 00:29:44.898 user 0m5.020s 00:29:44.898 sys 0m2.014s 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:44.898 ************************************ 00:29:44.898 END TEST nvmf_aer 00:29:44.898 ************************************ 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.898 ************************************ 00:29:44.898 START TEST nvmf_async_init 00:29:44.898 
************************************ 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:44.898 * Looking for test storage... 00:29:44.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@344 -- # case "$op" in 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:44.898 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:44.898 --rc genhtml_branch_coverage=1 00:29:44.898 --rc genhtml_function_coverage=1 00:29:44.898 --rc genhtml_legend=1 00:29:44.898 --rc geninfo_all_blocks=1 00:29:44.898 --rc geninfo_unexecuted_blocks=1 00:29:44.898 00:29:44.898 ' 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.898 --rc genhtml_branch_coverage=1 00:29:44.898 --rc genhtml_function_coverage=1 00:29:44.898 --rc genhtml_legend=1 00:29:44.898 --rc geninfo_all_blocks=1 00:29:44.898 --rc geninfo_unexecuted_blocks=1 00:29:44.898 00:29:44.898 ' 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.898 --rc genhtml_branch_coverage=1 00:29:44.898 --rc genhtml_function_coverage=1 00:29:44.898 --rc genhtml_legend=1 00:29:44.898 --rc geninfo_all_blocks=1 00:29:44.898 --rc geninfo_unexecuted_blocks=1 00:29:44.898 00:29:44.898 ' 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.898 --rc genhtml_branch_coverage=1 00:29:44.898 --rc genhtml_function_coverage=1 00:29:44.898 --rc genhtml_legend=1 00:29:44.898 --rc geninfo_all_blocks=1 00:29:44.898 --rc geninfo_unexecuted_blocks=1 00:29:44.898 00:29:44.898 ' 00:29:44.898 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.899 08:03:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.899 
08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8f76ef8ca0aa4f3a95cefb9d8174f2a6 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.899 08:03:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:46.818 08:03:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:46.818 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.818 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:46.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:46.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:46.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.819 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.078 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.078 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.078 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.078 08:03:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:47.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:29:47.078 00:29:47.078 --- 10.0.0.2 ping statistics --- 00:29:47.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.078 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:29:47.078 00:29:47.078 --- 10.0.0.1 ping statistics --- 00:29:47.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.078 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=830118 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 830118 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 830118 ']' 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.078 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.337 [2024-11-18 08:03:40.171661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:47.337 [2024-11-18 08:03:40.171749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.337 [2024-11-18 08:03:40.252122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.337 [2024-11-18 08:03:40.300884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.337 [2024-11-18 08:03:40.300947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.337 [2024-11-18 08:03:40.300963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.337 [2024-11-18 08:03:40.300975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.337 [2024-11-18 08:03:40.300997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:47.337 [2024-11-18 08:03:40.301648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.337 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.337 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:47.337 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.337 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.337 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.596 [2024-11-18 08:03:40.443955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.596 null0 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8f76ef8ca0aa4f3a95cefb9d8174f2a6 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.596 [2024-11-18 08:03:40.484224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.596 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.857 nvme0n1 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.857 [ 00:29:47.857 { 00:29:47.857 "name": "nvme0n1", 00:29:47.857 "aliases": [ 00:29:47.857 "8f76ef8c-a0aa-4f3a-95ce-fb9d8174f2a6" 00:29:47.857 ], 00:29:47.857 "product_name": "NVMe disk", 00:29:47.857 "block_size": 512, 00:29:47.857 "num_blocks": 2097152, 00:29:47.857 "uuid": "8f76ef8c-a0aa-4f3a-95ce-fb9d8174f2a6", 00:29:47.857 "numa_id": 0, 00:29:47.857 "assigned_rate_limits": { 00:29:47.857 "rw_ios_per_sec": 0, 00:29:47.857 "rw_mbytes_per_sec": 0, 00:29:47.857 "r_mbytes_per_sec": 0, 00:29:47.857 "w_mbytes_per_sec": 0 00:29:47.857 }, 00:29:47.857 "claimed": false, 00:29:47.857 "zoned": false, 00:29:47.857 "supported_io_types": { 00:29:47.857 "read": true, 00:29:47.857 "write": true, 00:29:47.857 "unmap": false, 00:29:47.857 "flush": true, 00:29:47.857 "reset": true, 00:29:47.857 "nvme_admin": true, 00:29:47.857 "nvme_io": true, 00:29:47.857 "nvme_io_md": false, 00:29:47.857 "write_zeroes": true, 00:29:47.857 "zcopy": false, 00:29:47.857 "get_zone_info": false, 00:29:47.857 "zone_management": false, 00:29:47.857 "zone_append": false, 00:29:47.857 "compare": true, 00:29:47.857 "compare_and_write": true, 00:29:47.857 "abort": true, 00:29:47.857 "seek_hole": false, 00:29:47.857 "seek_data": false, 00:29:47.857 "copy": true, 00:29:47.857 
"nvme_iov_md": false 00:29:47.857 }, 00:29:47.857 "memory_domains": [ 00:29:47.857 { 00:29:47.857 "dma_device_id": "system", 00:29:47.857 "dma_device_type": 1 00:29:47.857 } 00:29:47.857 ], 00:29:47.857 "driver_specific": { 00:29:47.857 "nvme": [ 00:29:47.857 { 00:29:47.857 "trid": { 00:29:47.857 "trtype": "TCP", 00:29:47.857 "adrfam": "IPv4", 00:29:47.857 "traddr": "10.0.0.2", 00:29:47.857 "trsvcid": "4420", 00:29:47.857 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.857 }, 00:29:47.857 "ctrlr_data": { 00:29:47.857 "cntlid": 1, 00:29:47.857 "vendor_id": "0x8086", 00:29:47.857 "model_number": "SPDK bdev Controller", 00:29:47.857 "serial_number": "00000000000000000000", 00:29:47.857 "firmware_revision": "25.01", 00:29:47.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.857 "oacs": { 00:29:47.857 "security": 0, 00:29:47.857 "format": 0, 00:29:47.857 "firmware": 0, 00:29:47.857 "ns_manage": 0 00:29:47.857 }, 00:29:47.857 "multi_ctrlr": true, 00:29:47.857 "ana_reporting": false 00:29:47.857 }, 00:29:47.857 "vs": { 00:29:47.857 "nvme_version": "1.3" 00:29:47.857 }, 00:29:47.857 "ns_data": { 00:29:47.857 "id": 1, 00:29:47.857 "can_share": true 00:29:47.857 } 00:29:47.857 } 00:29:47.857 ], 00:29:47.857 "mp_policy": "active_passive" 00:29:47.857 } 00:29:47.857 } 00:29:47.857 ] 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.857 [2024-11-18 08:03:40.733378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:47.857 [2024-11-18 08:03:40.733452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xc7e4a0 (9): Bad file descriptor 00:29:47.857 [2024-11-18 08:03:40.865627] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.857 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.857 [ 00:29:47.857 { 00:29:47.857 "name": "nvme0n1", 00:29:47.857 "aliases": [ 00:29:47.857 "8f76ef8c-a0aa-4f3a-95ce-fb9d8174f2a6" 00:29:47.857 ], 00:29:47.857 "product_name": "NVMe disk", 00:29:47.857 "block_size": 512, 00:29:47.857 "num_blocks": 2097152, 00:29:47.857 "uuid": "8f76ef8c-a0aa-4f3a-95ce-fb9d8174f2a6", 00:29:47.857 "numa_id": 0, 00:29:47.857 "assigned_rate_limits": { 00:29:47.857 "rw_ios_per_sec": 0, 00:29:47.857 "rw_mbytes_per_sec": 0, 00:29:47.857 "r_mbytes_per_sec": 0, 00:29:47.857 "w_mbytes_per_sec": 0 00:29:47.858 }, 00:29:47.858 "claimed": false, 00:29:47.858 "zoned": false, 00:29:47.858 "supported_io_types": { 00:29:47.858 "read": true, 00:29:47.858 "write": true, 00:29:47.858 "unmap": false, 00:29:47.858 "flush": true, 00:29:47.858 "reset": true, 00:29:47.858 "nvme_admin": true, 00:29:47.858 "nvme_io": true, 00:29:47.858 "nvme_io_md": false, 00:29:47.858 "write_zeroes": true, 00:29:47.858 "zcopy": false, 00:29:47.858 "get_zone_info": false, 00:29:47.858 "zone_management": false, 00:29:47.858 "zone_append": false, 00:29:47.858 "compare": true, 00:29:47.858 "compare_and_write": true, 00:29:47.858 "abort": true, 00:29:47.858 "seek_hole": false, 00:29:47.858 "seek_data": false, 00:29:47.858 "copy": true, 00:29:47.858 "nvme_iov_md": false 00:29:47.858 }, 00:29:47.858 "memory_domains": [ 
00:29:47.858 { 00:29:47.858 "dma_device_id": "system", 00:29:47.858 "dma_device_type": 1 00:29:47.858 } 00:29:47.858 ], 00:29:47.858 "driver_specific": { 00:29:47.858 "nvme": [ 00:29:47.858 { 00:29:47.858 "trid": { 00:29:47.858 "trtype": "TCP", 00:29:47.858 "adrfam": "IPv4", 00:29:47.858 "traddr": "10.0.0.2", 00:29:47.858 "trsvcid": "4420", 00:29:47.858 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.858 }, 00:29:47.858 "ctrlr_data": { 00:29:47.858 "cntlid": 2, 00:29:47.858 "vendor_id": "0x8086", 00:29:47.858 "model_number": "SPDK bdev Controller", 00:29:47.858 "serial_number": "00000000000000000000", 00:29:47.858 "firmware_revision": "25.01", 00:29:47.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.858 "oacs": { 00:29:47.858 "security": 0, 00:29:47.858 "format": 0, 00:29:47.858 "firmware": 0, 00:29:47.858 "ns_manage": 0 00:29:47.858 }, 00:29:47.858 "multi_ctrlr": true, 00:29:47.858 "ana_reporting": false 00:29:47.858 }, 00:29:47.858 "vs": { 00:29:47.858 "nvme_version": "1.3" 00:29:47.858 }, 00:29:47.858 "ns_data": { 00:29:47.858 "id": 1, 00:29:47.858 "can_share": true 00:29:47.858 } 00:29:47.858 } 00:29:47.858 ], 00:29:47.858 "mp_policy": "active_passive" 00:29:47.858 } 00:29:47.858 } 00:29:47.858 ] 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2rmn1B7PVh 
00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2rmn1B7PVh 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.2rmn1B7PVh 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.858 [2024-11-18 08:03:40.921972] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:47.858 [2024-11-18 08:03:40.922076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.858 08:03:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.858 [2024-11-18 08:03:40.938027] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:48.119 nvme0n1 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.119 [ 00:29:48.119 { 00:29:48.119 "name": "nvme0n1", 00:29:48.119 "aliases": [ 00:29:48.119 "8f76ef8c-a0aa-4f3a-95ce-fb9d8174f2a6" 00:29:48.119 ], 00:29:48.119 "product_name": "NVMe disk", 00:29:48.119 "block_size": 512, 00:29:48.119 "num_blocks": 2097152, 00:29:48.119 "uuid": "8f76ef8c-a0aa-4f3a-95ce-fb9d8174f2a6", 00:29:48.119 "numa_id": 0, 00:29:48.119 "assigned_rate_limits": { 00:29:48.119 "rw_ios_per_sec": 0, 00:29:48.119 
"rw_mbytes_per_sec": 0, 00:29:48.119 "r_mbytes_per_sec": 0, 00:29:48.119 "w_mbytes_per_sec": 0 00:29:48.119 }, 00:29:48.119 "claimed": false, 00:29:48.119 "zoned": false, 00:29:48.119 "supported_io_types": { 00:29:48.119 "read": true, 00:29:48.119 "write": true, 00:29:48.119 "unmap": false, 00:29:48.119 "flush": true, 00:29:48.119 "reset": true, 00:29:48.119 "nvme_admin": true, 00:29:48.119 "nvme_io": true, 00:29:48.119 "nvme_io_md": false, 00:29:48.119 "write_zeroes": true, 00:29:48.119 "zcopy": false, 00:29:48.119 "get_zone_info": false, 00:29:48.119 "zone_management": false, 00:29:48.119 "zone_append": false, 00:29:48.119 "compare": true, 00:29:48.119 "compare_and_write": true, 00:29:48.119 "abort": true, 00:29:48.119 "seek_hole": false, 00:29:48.119 "seek_data": false, 00:29:48.119 "copy": true, 00:29:48.119 "nvme_iov_md": false 00:29:48.119 }, 00:29:48.119 "memory_domains": [ 00:29:48.119 { 00:29:48.119 "dma_device_id": "system", 00:29:48.119 "dma_device_type": 1 00:29:48.119 } 00:29:48.119 ], 00:29:48.119 "driver_specific": { 00:29:48.119 "nvme": [ 00:29:48.119 { 00:29:48.119 "trid": { 00:29:48.119 "trtype": "TCP", 00:29:48.119 "adrfam": "IPv4", 00:29:48.119 "traddr": "10.0.0.2", 00:29:48.119 "trsvcid": "4421", 00:29:48.119 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:48.119 }, 00:29:48.119 "ctrlr_data": { 00:29:48.119 "cntlid": 3, 00:29:48.119 "vendor_id": "0x8086", 00:29:48.119 "model_number": "SPDK bdev Controller", 00:29:48.119 "serial_number": "00000000000000000000", 00:29:48.119 "firmware_revision": "25.01", 00:29:48.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.119 "oacs": { 00:29:48.119 "security": 0, 00:29:48.119 "format": 0, 00:29:48.119 "firmware": 0, 00:29:48.119 "ns_manage": 0 00:29:48.119 }, 00:29:48.119 "multi_ctrlr": true, 00:29:48.119 "ana_reporting": false 00:29:48.119 }, 00:29:48.119 "vs": { 00:29:48.119 "nvme_version": "1.3" 00:29:48.119 }, 00:29:48.119 "ns_data": { 00:29:48.119 "id": 1, 00:29:48.119 "can_share": true 00:29:48.119 } 
00:29:48.119 } 00:29:48.119 ], 00:29:48.119 "mp_policy": "active_passive" 00:29:48.119 } 00:29:48.119 } 00:29:48.119 ] 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.2rmn1B7PVh 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.119 rmmod nvme_tcp 00:29:48.119 rmmod nvme_fabrics 00:29:48.119 rmmod nvme_keyring 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:48.119 08:03:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 830118 ']' 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 830118 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 830118 ']' 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 830118 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 830118 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 830118' 00:29:48.119 killing process with pid 830118 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 830118 00:29:48.119 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 830118 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.378 08:03:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.378 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.379 08:03:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.289 08:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.289 00:29:50.289 real 0m5.848s 00:29:50.289 user 0m2.211s 00:29:50.289 sys 0m1.984s 00:29:50.289 08:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.289 08:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:50.289 ************************************ 00:29:50.289 END TEST nvmf_async_init 00:29:50.289 ************************************ 00:29:50.548 08:03:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:50.548 08:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.548 08:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.548 08:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.548 ************************************ 00:29:50.548 START TEST dma 00:29:50.548 ************************************ 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:50.549 * 
Looking for test storage... 00:29:50.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.549 --rc genhtml_branch_coverage=1 00:29:50.549 --rc genhtml_function_coverage=1 00:29:50.549 --rc genhtml_legend=1 00:29:50.549 --rc geninfo_all_blocks=1 00:29:50.549 --rc geninfo_unexecuted_blocks=1 00:29:50.549 00:29:50.549 ' 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.549 --rc genhtml_branch_coverage=1 00:29:50.549 --rc genhtml_function_coverage=1 
00:29:50.549 --rc genhtml_legend=1 00:29:50.549 --rc geninfo_all_blocks=1 00:29:50.549 --rc geninfo_unexecuted_blocks=1 00:29:50.549 00:29:50.549 ' 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.549 --rc genhtml_branch_coverage=1 00:29:50.549 --rc genhtml_function_coverage=1 00:29:50.549 --rc genhtml_legend=1 00:29:50.549 --rc geninfo_all_blocks=1 00:29:50.549 --rc geninfo_unexecuted_blocks=1 00:29:50.549 00:29:50.549 ' 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.549 --rc genhtml_branch_coverage=1 00:29:50.549 --rc genhtml_function_coverage=1 00:29:50.549 --rc genhtml_legend=1 00:29:50.549 --rc geninfo_all_blocks=1 00:29:50.549 --rc geninfo_unexecuted_blocks=1 00:29:50.549 00:29:50.549 ' 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:50.549 
08:03:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.549 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:50.550 00:29:50.550 real 0m0.150s 00:29:50.550 user 0m0.104s 00:29:50.550 sys 0m0.054s 00:29:50.550 08:03:43 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:50.550 ************************************ 00:29:50.550 END TEST dma 00:29:50.550 ************************************ 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.550 ************************************ 00:29:50.550 START TEST nvmf_identify 00:29:50.550 ************************************ 00:29:50.550 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:50.809 * Looking for test storage... 
00:29:50.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.809 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.810 --rc genhtml_branch_coverage=1 00:29:50.810 --rc genhtml_function_coverage=1 00:29:50.810 --rc genhtml_legend=1 00:29:50.810 --rc geninfo_all_blocks=1 00:29:50.810 --rc geninfo_unexecuted_blocks=1 00:29:50.810 00:29:50.810 ' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:29:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.810 --rc genhtml_branch_coverage=1 00:29:50.810 --rc genhtml_function_coverage=1 00:29:50.810 --rc genhtml_legend=1 00:29:50.810 --rc geninfo_all_blocks=1 00:29:50.810 --rc geninfo_unexecuted_blocks=1 00:29:50.810 00:29:50.810 ' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.810 --rc genhtml_branch_coverage=1 00:29:50.810 --rc genhtml_function_coverage=1 00:29:50.810 --rc genhtml_legend=1 00:29:50.810 --rc geninfo_all_blocks=1 00:29:50.810 --rc geninfo_unexecuted_blocks=1 00:29:50.810 00:29:50.810 ' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.810 --rc genhtml_branch_coverage=1 00:29:50.810 --rc genhtml_function_coverage=1 00:29:50.810 --rc genhtml_legend=1 00:29:50.810 --rc geninfo_all_blocks=1 00:29:50.810 --rc geninfo_unexecuted_blocks=1 00:29:50.810 00:29:50.810 ' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.810 08:03:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.348 08:03:45 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:53.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.348 
08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:53.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:53.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:53.348 08:03:45 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.348 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:53.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:29:53.349 00:29:53.349 --- 10.0.0.2 ping statistics --- 00:29:53.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.349 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:29:53.349 08:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:29:53.349 00:29:53.349 --- 10.0.0.1 ping statistics --- 00:29:53.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.349 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=832376 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 832376 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 832376 ']' 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.349 [2024-11-18 08:03:46.083014] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:53.349 [2024-11-18 08:03:46.083087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.349 [2024-11-18 08:03:46.157950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.349 [2024-11-18 08:03:46.209102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.349 [2024-11-18 08:03:46.209166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.349 [2024-11-18 08:03:46.209195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.349 [2024-11-18 08:03:46.209206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.349 [2024-11-18 08:03:46.209216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:53.349 [2024-11-18 08:03:46.210742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.349 [2024-11-18 08:03:46.210771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.349 [2024-11-18 08:03:46.210829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.349 [2024-11-18 08:03:46.210832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.349 [2024-11-18 08:03:46.336590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.349 Malloc0 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.349 08:03:46 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.349 [2024-11-18 08:03:46.429733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.349 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.350 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.350 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.612 08:03:46 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.612 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:53.612 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.612 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.612 [ 00:29:53.612 { 00:29:53.612 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:53.612 "subtype": "Discovery", 00:29:53.612 "listen_addresses": [ 00:29:53.612 { 00:29:53.612 "trtype": "TCP", 00:29:53.612 "adrfam": "IPv4", 00:29:53.612 "traddr": "10.0.0.2", 00:29:53.612 "trsvcid": "4420" 00:29:53.612 } 00:29:53.612 ], 00:29:53.612 "allow_any_host": true, 00:29:53.612 "hosts": [] 00:29:53.612 }, 00:29:53.612 { 00:29:53.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.612 "subtype": "NVMe", 00:29:53.612 "listen_addresses": [ 00:29:53.612 { 00:29:53.612 "trtype": "TCP", 00:29:53.612 "adrfam": "IPv4", 00:29:53.612 "traddr": "10.0.0.2", 00:29:53.612 "trsvcid": "4420" 00:29:53.612 } 00:29:53.612 ], 00:29:53.612 "allow_any_host": true, 00:29:53.612 "hosts": [], 00:29:53.612 "serial_number": "SPDK00000000000001", 00:29:53.612 "model_number": "SPDK bdev Controller", 00:29:53.612 "max_namespaces": 32, 00:29:53.612 "min_cntlid": 1, 00:29:53.612 "max_cntlid": 65519, 00:29:53.612 "namespaces": [ 00:29:53.612 { 00:29:53.612 "nsid": 1, 00:29:53.612 "bdev_name": "Malloc0", 00:29:53.612 "name": "Malloc0", 00:29:53.612 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:53.612 "eui64": "ABCDEF0123456789", 00:29:53.612 "uuid": "f730b0b7-609d-4321-b738-15b0076e0b6d" 00:29:53.612 } 00:29:53.612 ] 00:29:53.612 } 00:29:53.612 ] 00:29:53.612 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.612 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:53.612 [2024-11-18 08:03:46.471232] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:53.612 [2024-11-18 08:03:46.471286] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832401 ] 00:29:53.612 [2024-11-18 08:03:46.520986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:53.612 [2024-11-18 08:03:46.521063] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:53.612 [2024-11-18 08:03:46.521074] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:53.612 [2024-11-18 08:03:46.521091] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:53.612 [2024-11-18 08:03:46.521107] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:53.612 [2024-11-18 08:03:46.528967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:53.612 [2024-11-18 08:03:46.529035] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa16d80 0 00:29:53.612 [2024-11-18 08:03:46.529160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:53.612 [2024-11-18 08:03:46.529177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:53.612 [2024-11-18 08:03:46.529186] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:53.612 [2024-11-18 08:03:46.529193] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:53.612 [2024-11-18 08:03:46.529241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.529256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.529264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.612 [2024-11-18 08:03:46.529285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:53.612 [2024-11-18 08:03:46.529310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.612 [2024-11-18 08:03:46.536522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.612 [2024-11-18 08:03:46.536540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.612 [2024-11-18 08:03:46.536547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.536556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.612 [2024-11-18 08:03:46.536593] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:53.612 [2024-11-18 08:03:46.536607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:53.612 [2024-11-18 08:03:46.536618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:53.612 [2024-11-18 08:03:46.536644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.536652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.536659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 
00:29:53.612 [2024-11-18 08:03:46.536671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.612 [2024-11-18 08:03:46.536696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.612 [2024-11-18 08:03:46.536793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.612 [2024-11-18 08:03:46.536808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.612 [2024-11-18 08:03:46.536815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.536822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.612 [2024-11-18 08:03:46.536832] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:53.612 [2024-11-18 08:03:46.536852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:53.612 [2024-11-18 08:03:46.536866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.536873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.536880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.612 [2024-11-18 08:03:46.536890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.612 [2024-11-18 08:03:46.536912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.612 [2024-11-18 08:03:46.536983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.612 [2024-11-18 08:03:46.536995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:53.612 [2024-11-18 08:03:46.537002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.537009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.612 [2024-11-18 08:03:46.537019] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:53.612 [2024-11-18 08:03:46.537033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:53.612 [2024-11-18 08:03:46.537045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.537052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.612 [2024-11-18 08:03:46.537059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.612 [2024-11-18 08:03:46.537069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.612 [2024-11-18 08:03:46.537089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.612 [2024-11-18 08:03:46.537160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.612 [2024-11-18 08:03:46.537173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.613 [2024-11-18 08:03:46.537179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.613 [2024-11-18 08:03:46.537195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:53.613 [2024-11-18 08:03:46.537212] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.537237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.613 [2024-11-18 08:03:46.537258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.613 [2024-11-18 08:03:46.537337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.613 [2024-11-18 08:03:46.537351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.613 [2024-11-18 08:03:46.537358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.613 [2024-11-18 08:03:46.537375] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:53.613 [2024-11-18 08:03:46.537384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:53.613 [2024-11-18 08:03:46.537401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:53.613 [2024-11-18 08:03:46.537514] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:53.613 [2024-11-18 08:03:46.537525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:53.613 [2024-11-18 08:03:46.537543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.537567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.613 [2024-11-18 08:03:46.537589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.613 [2024-11-18 08:03:46.537674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.613 [2024-11-18 08:03:46.537686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.613 [2024-11-18 08:03:46.537693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.613 [2024-11-18 08:03:46.537710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:53.613 [2024-11-18 08:03:46.537727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.537752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.613 [2024-11-18 08:03:46.537773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.613 [2024-11-18 
08:03:46.537849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.613 [2024-11-18 08:03:46.537863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.613 [2024-11-18 08:03:46.537870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.613 [2024-11-18 08:03:46.537884] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:53.613 [2024-11-18 08:03:46.537893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:53.613 [2024-11-18 08:03:46.537908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:53.613 [2024-11-18 08:03:46.537925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:53.613 [2024-11-18 08:03:46.537944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.537952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.537962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.613 [2024-11-18 08:03:46.537983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.613 [2024-11-18 08:03:46.538104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.613 [2024-11-18 08:03:46.538123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:29:53.613 [2024-11-18 08:03:46.538131] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538138] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa16d80): datao=0, datal=4096, cccid=0 00:29:53.613 [2024-11-18 08:03:46.538147] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa82480) on tqpair(0xa16d80): expected_datao=0, payload_size=4096 00:29:53.613 [2024-11-18 08:03:46.538154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538166] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538176] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.613 [2024-11-18 08:03:46.538199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.613 [2024-11-18 08:03:46.538206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.613 [2024-11-18 08:03:46.538226] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:53.613 [2024-11-18 08:03:46.538235] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:53.613 [2024-11-18 08:03:46.538242] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:53.613 [2024-11-18 08:03:46.538257] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:53.613 [2024-11-18 08:03:46.538268] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:29:53.613 [2024-11-18 08:03:46.538276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:53.613 [2024-11-18 08:03:46.538295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:53.613 [2024-11-18 08:03:46.538309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.538334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:53.613 [2024-11-18 08:03:46.538355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.613 [2024-11-18 08:03:46.538440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.613 [2024-11-18 08:03:46.538454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.613 [2024-11-18 08:03:46.538461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80 00:29:53.613 [2024-11-18 08:03:46.538481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.538515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.613 [2024-11-18 08:03:46.538525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.538551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.613 [2024-11-18 08:03:46.538561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.538583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.613 [2024-11-18 08:03:46.538592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.538613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.613 [2024-11-18 08:03:46.538622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:53.613 [2024-11-18 08:03:46.538637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:29:53.613 [2024-11-18 08:03:46.538650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.613 [2024-11-18 08:03:46.538657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa16d80) 00:29:53.613 [2024-11-18 08:03:46.538666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.614 [2024-11-18 08:03:46.538688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82480, cid 0, qid 0 00:29:53.614 [2024-11-18 08:03:46.538700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82600, cid 1, qid 0 00:29:53.614 [2024-11-18 08:03:46.538707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82780, cid 2, qid 0 00:29:53.614 [2024-11-18 08:03:46.538715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82900, cid 3, qid 0 00:29:53.614 [2024-11-18 08:03:46.538722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82a80, cid 4, qid 0 00:29:53.614 [2024-11-18 08:03:46.538831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.614 [2024-11-18 08:03:46.538845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.614 [2024-11-18 08:03:46.538852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.538858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82a80) on tqpair=0xa16d80 00:29:53.614 [2024-11-18 08:03:46.538874] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:53.614 [2024-11-18 08:03:46.538884] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:53.614 [2024-11-18 08:03:46.538902] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.538911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa16d80) 00:29:53.614 [2024-11-18 08:03:46.538922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.614 [2024-11-18 08:03:46.538942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82a80, cid 4, qid 0 00:29:53.614 [2024-11-18 08:03:46.539041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.614 [2024-11-18 08:03:46.539053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.614 [2024-11-18 08:03:46.539060] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539066] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa16d80): datao=0, datal=4096, cccid=4 00:29:53.614 [2024-11-18 08:03:46.539078] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa82a80) on tqpair(0xa16d80): expected_datao=0, payload_size=4096 00:29:53.614 [2024-11-18 08:03:46.539086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539096] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539104] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.614 [2024-11-18 08:03:46.539126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.614 [2024-11-18 08:03:46.539132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82a80) on tqpair=0xa16d80 00:29:53.614 [2024-11-18 08:03:46.539160] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:53.614 [2024-11-18 08:03:46.539202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa16d80) 00:29:53.614 [2024-11-18 08:03:46.539223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.614 [2024-11-18 08:03:46.539236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa16d80) 00:29:53.614 [2024-11-18 08:03:46.539258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.614 [2024-11-18 08:03:46.539285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82a80, cid 4, qid 0 00:29:53.614 [2024-11-18 08:03:46.539297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82c00, cid 5, qid 0 00:29:53.614 [2024-11-18 08:03:46.539418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.614 [2024-11-18 08:03:46.539431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.614 [2024-11-18 08:03:46.539438] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539444] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa16d80): datao=0, datal=1024, cccid=4 00:29:53.614 [2024-11-18 08:03:46.539452] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa82a80) on tqpair(0xa16d80): expected_datao=0, 
payload_size=1024 00:29:53.614 [2024-11-18 08:03:46.539459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539468] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539476] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.614 [2024-11-18 08:03:46.539502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.614 [2024-11-18 08:03:46.539510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.539517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82c00) on tqpair=0xa16d80 00:29:53.614 [2024-11-18 08:03:46.579572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.614 [2024-11-18 08:03:46.579592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.614 [2024-11-18 08:03:46.579600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.579607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82a80) on tqpair=0xa16d80 00:29:53.614 [2024-11-18 08:03:46.579628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.614 [2024-11-18 08:03:46.579637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa16d80) 00:29:53.614 [2024-11-18 08:03:46.579648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.614 [2024-11-18 08:03:46.579683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82a80, cid 4, qid 0 00:29:53.614 [2024-11-18 08:03:46.579791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.614 [2024-11-18 08:03:46.579806] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.614 [2024-11-18 08:03:46.579813] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.579819] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa16d80): datao=0, datal=3072, cccid=4 00:29:53.615 [2024-11-18 08:03:46.579827] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa82a80) on tqpair(0xa16d80): expected_datao=0, payload_size=3072 00:29:53.615 [2024-11-18 08:03:46.579834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.579844] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.579852] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.579864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.615 [2024-11-18 08:03:46.579874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.615 [2024-11-18 08:03:46.579880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.579887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82a80) on tqpair=0xa16d80 00:29:53.615 [2024-11-18 08:03:46.579903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.579911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa16d80) 00:29:53.615 [2024-11-18 08:03:46.579922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.615 [2024-11-18 08:03:46.579950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82a80, cid 4, qid 0 00:29:53.615 [2024-11-18 08:03:46.580046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.615 [2024-11-18 
08:03:46.580060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.615 [2024-11-18 08:03:46.580066] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.580073] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa16d80): datao=0, datal=8, cccid=4 00:29:53.615 [2024-11-18 08:03:46.580080] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa82a80) on tqpair(0xa16d80): expected_datao=0, payload_size=8 00:29:53.615 [2024-11-18 08:03:46.580088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.580097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.580105] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.620556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.615 [2024-11-18 08:03:46.620574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.615 [2024-11-18 08:03:46.620582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.615 [2024-11-18 08:03:46.620589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82a80) on tqpair=0xa16d80 00:29:53.615 ===================================================== 00:29:53.615 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:53.615 ===================================================== 00:29:53.615 Controller Capabilities/Features 00:29:53.615 ================================ 00:29:53.615 Vendor ID: 0000 00:29:53.615 Subsystem Vendor ID: 0000 00:29:53.615 Serial Number: .................... 00:29:53.615 Model Number: ........................................ 
00:29:53.615 Firmware Version: 25.01 00:29:53.615 Recommended Arb Burst: 0 00:29:53.615 IEEE OUI Identifier: 00 00 00 00:29:53.615 Multi-path I/O 00:29:53.615 May have multiple subsystem ports: No 00:29:53.615 May have multiple controllers: No 00:29:53.615 Associated with SR-IOV VF: No 00:29:53.615 Max Data Transfer Size: 131072 00:29:53.615 Max Number of Namespaces: 0 00:29:53.615 Max Number of I/O Queues: 1024 00:29:53.615 NVMe Specification Version (VS): 1.3 00:29:53.615 NVMe Specification Version (Identify): 1.3 00:29:53.615 Maximum Queue Entries: 128 00:29:53.615 Contiguous Queues Required: Yes 00:29:53.615 Arbitration Mechanisms Supported 00:29:53.615 Weighted Round Robin: Not Supported 00:29:53.615 Vendor Specific: Not Supported 00:29:53.615 Reset Timeout: 15000 ms 00:29:53.615 Doorbell Stride: 4 bytes 00:29:53.615 NVM Subsystem Reset: Not Supported 00:29:53.615 Command Sets Supported 00:29:53.615 NVM Command Set: Supported 00:29:53.615 Boot Partition: Not Supported 00:29:53.615 Memory Page Size Minimum: 4096 bytes 00:29:53.615 Memory Page Size Maximum: 4096 bytes 00:29:53.615 Persistent Memory Region: Not Supported 00:29:53.615 Optional Asynchronous Events Supported 00:29:53.615 Namespace Attribute Notices: Not Supported 00:29:53.615 Firmware Activation Notices: Not Supported 00:29:53.615 ANA Change Notices: Not Supported 00:29:53.615 PLE Aggregate Log Change Notices: Not Supported 00:29:53.615 LBA Status Info Alert Notices: Not Supported 00:29:53.615 EGE Aggregate Log Change Notices: Not Supported 00:29:53.615 Normal NVM Subsystem Shutdown event: Not Supported 00:29:53.615 Zone Descriptor Change Notices: Not Supported 00:29:53.615 Discovery Log Change Notices: Supported 00:29:53.615 Controller Attributes 00:29:53.615 128-bit Host Identifier: Not Supported 00:29:53.615 Non-Operational Permissive Mode: Not Supported 00:29:53.615 NVM Sets: Not Supported 00:29:53.615 Read Recovery Levels: Not Supported 00:29:53.615 Endurance Groups: Not Supported 00:29:53.615 
Predictable Latency Mode: Not Supported 00:29:53.615 Traffic Based Keep ALive: Not Supported 00:29:53.615 Namespace Granularity: Not Supported 00:29:53.615 SQ Associations: Not Supported 00:29:53.615 UUID List: Not Supported 00:29:53.615 Multi-Domain Subsystem: Not Supported 00:29:53.615 Fixed Capacity Management: Not Supported 00:29:53.615 Variable Capacity Management: Not Supported 00:29:53.615 Delete Endurance Group: Not Supported 00:29:53.615 Delete NVM Set: Not Supported 00:29:53.615 Extended LBA Formats Supported: Not Supported 00:29:53.615 Flexible Data Placement Supported: Not Supported 00:29:53.615 00:29:53.615 Controller Memory Buffer Support 00:29:53.615 ================================ 00:29:53.615 Supported: No 00:29:53.615 00:29:53.615 Persistent Memory Region Support 00:29:53.615 ================================ 00:29:53.615 Supported: No 00:29:53.615 00:29:53.615 Admin Command Set Attributes 00:29:53.615 ============================ 00:29:53.615 Security Send/Receive: Not Supported 00:29:53.615 Format NVM: Not Supported 00:29:53.615 Firmware Activate/Download: Not Supported 00:29:53.615 Namespace Management: Not Supported 00:29:53.615 Device Self-Test: Not Supported 00:29:53.615 Directives: Not Supported 00:29:53.615 NVMe-MI: Not Supported 00:29:53.615 Virtualization Management: Not Supported 00:29:53.615 Doorbell Buffer Config: Not Supported 00:29:53.615 Get LBA Status Capability: Not Supported 00:29:53.615 Command & Feature Lockdown Capability: Not Supported 00:29:53.615 Abort Command Limit: 1 00:29:53.615 Async Event Request Limit: 4 00:29:53.615 Number of Firmware Slots: N/A 00:29:53.615 Firmware Slot 1 Read-Only: N/A 00:29:53.615 Firmware Activation Without Reset: N/A 00:29:53.615 Multiple Update Detection Support: N/A 00:29:53.615 Firmware Update Granularity: No Information Provided 00:29:53.615 Per-Namespace SMART Log: No 00:29:53.615 Asymmetric Namespace Access Log Page: Not Supported 00:29:53.615 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:53.615 Command Effects Log Page: Not Supported 00:29:53.615 Get Log Page Extended Data: Supported 00:29:53.615 Telemetry Log Pages: Not Supported 00:29:53.615 Persistent Event Log Pages: Not Supported 00:29:53.615 Supported Log Pages Log Page: May Support 00:29:53.615 Commands Supported & Effects Log Page: Not Supported 00:29:53.615 Feature Identifiers & Effects Log Page:May Support 00:29:53.615 NVMe-MI Commands & Effects Log Page: May Support 00:29:53.615 Data Area 4 for Telemetry Log: Not Supported 00:29:53.615 Error Log Page Entries Supported: 128 00:29:53.615 Keep Alive: Not Supported 00:29:53.615 00:29:53.615 NVM Command Set Attributes 00:29:53.615 ========================== 00:29:53.615 Submission Queue Entry Size 00:29:53.615 Max: 1 00:29:53.615 Min: 1 00:29:53.615 Completion Queue Entry Size 00:29:53.615 Max: 1 00:29:53.615 Min: 1 00:29:53.615 Number of Namespaces: 0 00:29:53.615 Compare Command: Not Supported 00:29:53.615 Write Uncorrectable Command: Not Supported 00:29:53.615 Dataset Management Command: Not Supported 00:29:53.615 Write Zeroes Command: Not Supported 00:29:53.615 Set Features Save Field: Not Supported 00:29:53.615 Reservations: Not Supported 00:29:53.615 Timestamp: Not Supported 00:29:53.615 Copy: Not Supported 00:29:53.615 Volatile Write Cache: Not Present 00:29:53.615 Atomic Write Unit (Normal): 1 00:29:53.615 Atomic Write Unit (PFail): 1 00:29:53.615 Atomic Compare & Write Unit: 1 00:29:53.615 Fused Compare & Write: Supported 00:29:53.615 Scatter-Gather List 00:29:53.615 SGL Command Set: Supported 00:29:53.615 SGL Keyed: Supported 00:29:53.615 SGL Bit Bucket Descriptor: Not Supported 00:29:53.615 SGL Metadata Pointer: Not Supported 00:29:53.615 Oversized SGL: Not Supported 00:29:53.615 SGL Metadata Address: Not Supported 00:29:53.615 SGL Offset: Supported 00:29:53.615 Transport SGL Data Block: Not Supported 00:29:53.615 Replay Protected Memory Block: Not Supported 00:29:53.615 00:29:53.616 
Firmware Slot Information 00:29:53.616 ========================= 00:29:53.616 Active slot: 0 00:29:53.616 00:29:53.616 00:29:53.616 Error Log 00:29:53.616 ========= 00:29:53.616 00:29:53.616 Active Namespaces 00:29:53.616 ================= 00:29:53.616 Discovery Log Page 00:29:53.616 ================== 00:29:53.616 Generation Counter: 2 00:29:53.616 Number of Records: 2 00:29:53.616 Record Format: 0 00:29:53.616 00:29:53.616 Discovery Log Entry 0 00:29:53.616 ---------------------- 00:29:53.616 Transport Type: 3 (TCP) 00:29:53.616 Address Family: 1 (IPv4) 00:29:53.616 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:53.616 Entry Flags: 00:29:53.616 Duplicate Returned Information: 1 00:29:53.616 Explicit Persistent Connection Support for Discovery: 1 00:29:53.616 Transport Requirements: 00:29:53.616 Secure Channel: Not Required 00:29:53.616 Port ID: 0 (0x0000) 00:29:53.616 Controller ID: 65535 (0xffff) 00:29:53.616 Admin Max SQ Size: 128 00:29:53.616 Transport Service Identifier: 4420 00:29:53.616 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:53.616 Transport Address: 10.0.0.2 00:29:53.616 Discovery Log Entry 1 00:29:53.616 ---------------------- 00:29:53.616 Transport Type: 3 (TCP) 00:29:53.616 Address Family: 1 (IPv4) 00:29:53.616 Subsystem Type: 2 (NVM Subsystem) 00:29:53.616 Entry Flags: 00:29:53.616 Duplicate Returned Information: 0 00:29:53.616 Explicit Persistent Connection Support for Discovery: 0 00:29:53.616 Transport Requirements: 00:29:53.616 Secure Channel: Not Required 00:29:53.616 Port ID: 0 (0x0000) 00:29:53.616 Controller ID: 65535 (0xffff) 00:29:53.616 Admin Max SQ Size: 128 00:29:53.616 Transport Service Identifier: 4420 00:29:53.616 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:53.616 Transport Address: 10.0.0.2 [2024-11-18 08:03:46.620711] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:53.616 [2024-11-18 
08:03:46.620735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82480) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.620750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.616 [2024-11-18 08:03:46.620759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82600) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.620767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.616 [2024-11-18 08:03:46.620775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82780) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.620786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.616 [2024-11-18 08:03:46.620795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82900) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.620803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.616 [2024-11-18 08:03:46.620821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.620831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.620837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa16d80)
00:29:53.616 [2024-11-18 08:03:46.620849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.616 [2024-11-18 08:03:46.620874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82900, cid 3, qid 0
00:29:53.616 [2024-11-18 08:03:46.620951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.616 [2024-11-18 08:03:46.620965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.616 [2024-11-18 08:03:46.620973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.620979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82900) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.620993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.621001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.621008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa16d80)
00:29:53.616 [2024-11-18 08:03:46.621018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.616 [2024-11-18 08:03:46.621044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82900, cid 3, qid 0
00:29:53.616 [2024-11-18 08:03:46.621136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.616 [2024-11-18 08:03:46.621150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.616 [2024-11-18 08:03:46.621157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.621164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82900) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.621173] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:29:53.616 [2024-11-18 08:03:46.621181] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:29:53.616 [2024-11-18 08:03:46.621197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.621206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.621212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa16d80)
00:29:53.616 [2024-11-18 08:03:46.621223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.616 [2024-11-18 08:03:46.621243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82900, cid 3, qid 0
00:29:53.616 [2024-11-18 08:03:46.621325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.616 [2024-11-18 08:03:46.621339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.616 [2024-11-18 08:03:46.621346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.621352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82900) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.621371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.621380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.621387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa16d80)
00:29:53.616 [2024-11-18 08:03:46.621401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.616 [2024-11-18 08:03:46.621422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82900, cid 3, qid 0
00:29:53.616 [2024-11-18 08:03:46.625505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.616 [2024-11-18 08:03:46.625521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.616 [2024-11-18 08:03:46.625528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.625534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82900) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.625551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.625576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.625583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa16d80)
00:29:53.616 [2024-11-18 08:03:46.625594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.616 [2024-11-18 08:03:46.625617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa82900, cid 3, qid 0
00:29:53.616 [2024-11-18 08:03:46.625698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.616 [2024-11-18 08:03:46.625711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.616 [2024-11-18 08:03:46.625718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.616 [2024-11-18 08:03:46.625724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa82900) on tqpair=0xa16d80
00:29:53.616 [2024-11-18 08:03:46.625737] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds
00:29:53.616 
00:29:53.616 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:29:53.616 [2024-11-18 08:03:46.660602] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:29:53.616 [2024-11-18 08:03:46.660646] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832405 ]
00:29:53.881 [2024-11-18 08:03:46.708452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:29:53.881 [2024-11-18 08:03:46.708534] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:29:53.881 [2024-11-18 08:03:46.708547] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:29:53.881 [2024-11-18 08:03:46.708563] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:29:53.881 [2024-11-18 08:03:46.708577] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:29:53.881 [2024-11-18 08:03:46.712751] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:29:53.881 [2024-11-18 08:03:46.712806] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1203d80 0
00:29:53.881 [2024-11-18 08:03:46.719504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:29:53.881 [2024-11-18 08:03:46.719535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:29:53.881 [2024-11-18 08:03:46.719543] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:29:53.881 [2024-11-18 08:03:46.719549] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:29:53.881 [2024-11-18 08:03:46.719580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.719604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.719612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.881 [2024-11-18 08:03:46.719627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:29:53.881 [2024-11-18 08:03:46.719654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.881 [2024-11-18 08:03:46.727507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.881 [2024-11-18 08:03:46.727526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.881 [2024-11-18 08:03:46.727534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.727541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.881 [2024-11-18 08:03:46.727563] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:29:53.881 [2024-11-18 08:03:46.727575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:29:53.881 [2024-11-18 08:03:46.727584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:29:53.881 [2024-11-18 08:03:46.727603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.727612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.727619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.881 [2024-11-18 08:03:46.727630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.881 [2024-11-18 08:03:46.727654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.881 [2024-11-18 08:03:46.727771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.881 [2024-11-18 08:03:46.727785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.881 [2024-11-18 08:03:46.727792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.727799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.881 [2024-11-18 08:03:46.727807] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:29:53.881 [2024-11-18 08:03:46.727821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:29:53.881 [2024-11-18 08:03:46.727834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.727842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.727849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.881 [2024-11-18 08:03:46.727859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.881 [2024-11-18 08:03:46.727881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.881 [2024-11-18 08:03:46.727954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.881 [2024-11-18 08:03:46.727967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.881 [2024-11-18 08:03:46.727974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.727980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.881 [2024-11-18 08:03:46.727989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:29:53.881 [2024-11-18 08:03:46.728003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:29:53.881 [2024-11-18 08:03:46.728015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.881 [2024-11-18 08:03:46.728044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.881 [2024-11-18 08:03:46.728066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.881 [2024-11-18 08:03:46.728139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.881 [2024-11-18 08:03:46.728151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.881 [2024-11-18 08:03:46.728158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.881 [2024-11-18 08:03:46.728173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:29:53.881 [2024-11-18 08:03:46.728190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.881 [2024-11-18 08:03:46.728216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.881 [2024-11-18 08:03:46.728237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.881 [2024-11-18 08:03:46.728313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.881 [2024-11-18 08:03:46.728327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.881 [2024-11-18 08:03:46.728333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.881 [2024-11-18 08:03:46.728347] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:29:53.881 [2024-11-18 08:03:46.728356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:29:53.881 [2024-11-18 08:03:46.728370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:29:53.881 [2024-11-18 08:03:46.728480] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:29:53.881 [2024-11-18 08:03:46.728488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:29:53.881 [2024-11-18 08:03:46.728518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.881 [2024-11-18 08:03:46.728543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.881 [2024-11-18 08:03:46.728566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.881 [2024-11-18 08:03:46.728688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.881 [2024-11-18 08:03:46.728702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.881 [2024-11-18 08:03:46.728709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.881 [2024-11-18 08:03:46.728724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:29:53.881 [2024-11-18 08:03:46.728749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.881 [2024-11-18 08:03:46.728770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.881 [2024-11-18 08:03:46.728780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.882 [2024-11-18 08:03:46.728801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.882 [2024-11-18 08:03:46.728877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.882 [2024-11-18 08:03:46.728889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.882 [2024-11-18 08:03:46.728896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.728902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.882 [2024-11-18 08:03:46.728910] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:29:53.882 [2024-11-18 08:03:46.728919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.728932] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:29:53.882 [2024-11-18 08:03:46.728951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.728965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.728973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.728983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.882 [2024-11-18 08:03:46.729005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.882 [2024-11-18 08:03:46.729119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:53.882 [2024-11-18 08:03:46.729133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:53.882 [2024-11-18 08:03:46.729140] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729146] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1203d80): datao=0, datal=4096, cccid=0
00:29:53.882 [2024-11-18 08:03:46.729154] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126f480) on tqpair(0x1203d80): expected_datao=0, payload_size=4096
00:29:53.882 [2024-11-18 08:03:46.729162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729179] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729188] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.882 [2024-11-18 08:03:46.729233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.882 [2024-11-18 08:03:46.729240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.882 [2024-11-18 08:03:46.729257] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:29:53.882 [2024-11-18 08:03:46.729265] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:29:53.882 [2024-11-18 08:03:46.729273] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:29:53.882 [2024-11-18 08:03:46.729287] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:29:53.882 [2024-11-18 08:03:46.729296] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:29:53.882 [2024-11-18 08:03:46.729307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.729325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.729339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.729363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:29:53.882 [2024-11-18 08:03:46.729385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.882 [2024-11-18 08:03:46.729461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.882 [2024-11-18 08:03:46.729474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.882 [2024-11-18 08:03:46.729481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80
00:29:53.882 [2024-11-18 08:03:46.729510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.729536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.882 [2024-11-18 08:03:46.729547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.729569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.882 [2024-11-18 08:03:46.729579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.729600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.882 [2024-11-18 08:03:46.729610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.729632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.882 [2024-11-18 08:03:46.729640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.729655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.729667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.729685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.882 [2024-11-18 08:03:46.729708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f480, cid 0, qid 0
00:29:53.882 [2024-11-18 08:03:46.729723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f600, cid 1, qid 0
00:29:53.882 [2024-11-18 08:03:46.729731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f780, cid 2, qid 0
00:29:53.882 [2024-11-18 08:03:46.729739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f900, cid 3, qid 0
00:29:53.882 [2024-11-18 08:03:46.729746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fa80, cid 4, qid 0
00:29:53.882 [2024-11-18 08:03:46.729882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.882 [2024-11-18 08:03:46.729894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.882 [2024-11-18 08:03:46.729901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fa80) on tqpair=0x1203d80
00:29:53.882 [2024-11-18 08:03:46.729919] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:29:53.882 [2024-11-18 08:03:46.729929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.729944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.729956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.729967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.729981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.729991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:29:53.882 [2024-11-18 08:03:46.730013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fa80, cid 4, qid 0
00:29:53.882 [2024-11-18 08:03:46.730117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.882 [2024-11-18 08:03:46.730132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.882 [2024-11-18 08:03:46.730139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.730147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fa80) on tqpair=0x1203d80
00:29:53.882 [2024-11-18 08:03:46.730218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.730239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:29:53.882 [2024-11-18 08:03:46.730255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.730263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1203d80)
00:29:53.882 [2024-11-18 08:03:46.730273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.882 [2024-11-18 08:03:46.730295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fa80, cid 4, qid 0
00:29:53.882 [2024-11-18 08:03:46.730392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:53.882 [2024-11-18 08:03:46.730407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:53.882 [2024-11-18 08:03:46.730414] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:53.882 [2024-11-18 08:03:46.730420] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1203d80): datao=0, datal=4096, cccid=4
00:29:53.882 [2024-11-18 08:03:46.730428] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126fa80) on tqpair(0x1203d80): expected_datao=0, payload_size=4096
00:29:53.882 [2024-11-18 08:03:46.730439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.730457] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.730466] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.770623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.883 [2024-11-18 08:03:46.770643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.883 [2024-11-18 08:03:46.770651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.770658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fa80) on tqpair=0x1203d80
00:29:53.883 [2024-11-18 08:03:46.770675] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:29:53.883 [2024-11-18 08:03:46.770699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.770718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.770732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.770740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1203d80)
00:29:53.883 [2024-11-18 08:03:46.770752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.883 [2024-11-18 08:03:46.770780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fa80, cid 4, qid 0
00:29:53.883 [2024-11-18 08:03:46.770890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:53.883 [2024-11-18 08:03:46.770905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:53.883 [2024-11-18 08:03:46.770912] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.770918] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1203d80): datao=0, datal=4096, cccid=4
00:29:53.883 [2024-11-18 08:03:46.770926] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126fa80) on tqpair(0x1203d80): expected_datao=0, payload_size=4096
00:29:53.883 [2024-11-18 08:03:46.770933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.770951] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.770960] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.813503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.883 [2024-11-18 08:03:46.813524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.883 [2024-11-18 08:03:46.813532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.813539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fa80) on tqpair=0x1203d80
00:29:53.883 [2024-11-18 08:03:46.813565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.813586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.813617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.813625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1203d80)
00:29:53.883 [2024-11-18 08:03:46.813637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.883 [2024-11-18 08:03:46.813662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fa80, cid 4, qid 0
00:29:53.883 [2024-11-18 08:03:46.813786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:53.883 [2024-11-18 08:03:46.813800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:53.883 [2024-11-18 08:03:46.813808] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.813818] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1203d80): datao=0, datal=4096, cccid=4
00:29:53.883 [2024-11-18 08:03:46.813827] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126fa80) on tqpair(0x1203d80): expected_datao=0, payload_size=4096
00:29:53.883 [2024-11-18 08:03:46.813834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.813852] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.813862] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.854599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:53.883 [2024-11-18 08:03:46.854619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:53.883 [2024-11-18 08:03:46.854626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.854633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fa80) on tqpair=0x1203d80
00:29:53.883 [2024-11-18 08:03:46.854649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.854665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.854682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.854695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.854705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.854714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.854724] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:29:53.883 [2024-11-18 08:03:46.854732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:29:53.883 [2024-11-18 08:03:46.854741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:29:53.883 [2024-11-18 08:03:46.854760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:53.883 [2024-11-18 08:03:46.854770] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1203d80) 00:29:53.883 [2024-11-18 08:03:46.854782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.883 [2024-11-18 08:03:46.854793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.854800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.854806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1203d80) 00:29:53.883 [2024-11-18 08:03:46.854816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.883 [2024-11-18 08:03:46.854843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fa80, cid 4, qid 0 00:29:53.883 [2024-11-18 08:03:46.854856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fc00, cid 5, qid 0 00:29:53.883 [2024-11-18 08:03:46.854949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.883 [2024-11-18 08:03:46.854961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.883 [2024-11-18 08:03:46.854968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.854975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fa80) on tqpair=0x1203d80 00:29:53.883 [2024-11-18 08:03:46.854985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.883 [2024-11-18 08:03:46.854999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.883 [2024-11-18 08:03:46.855007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fc00) on tqpair=0x1203d80 00:29:53.883 [2024-11-18 
08:03:46.855029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1203d80) 00:29:53.883 [2024-11-18 08:03:46.855049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.883 [2024-11-18 08:03:46.855071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fc00, cid 5, qid 0 00:29:53.883 [2024-11-18 08:03:46.855146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.883 [2024-11-18 08:03:46.855158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.883 [2024-11-18 08:03:46.855165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fc00) on tqpair=0x1203d80 00:29:53.883 [2024-11-18 08:03:46.855187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1203d80) 00:29:53.883 [2024-11-18 08:03:46.855207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.883 [2024-11-18 08:03:46.855227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fc00, cid 5, qid 0 00:29:53.883 [2024-11-18 08:03:46.855299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.883 [2024-11-18 08:03:46.855312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.883 [2024-11-18 08:03:46.855318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x126fc00) on tqpair=0x1203d80 00:29:53.883 [2024-11-18 08:03:46.855340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1203d80) 00:29:53.883 [2024-11-18 08:03:46.855360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.883 [2024-11-18 08:03:46.855380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fc00, cid 5, qid 0 00:29:53.883 [2024-11-18 08:03:46.855462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.883 [2024-11-18 08:03:46.855474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.883 [2024-11-18 08:03:46.855481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fc00) on tqpair=0x1203d80 00:29:53.883 [2024-11-18 08:03:46.855521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1203d80) 00:29:53.883 [2024-11-18 08:03:46.855544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.883 [2024-11-18 08:03:46.855556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.883 [2024-11-18 08:03:46.855564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1203d80) 00:29:53.883 [2024-11-18 08:03:46.855573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
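Each `*NOTICE*` line emitted by `nvme_admin_qpair_print_command` in the stream above encodes the admin opcode, queue and command IDs, namespace ID, and the command dwords. A small illustrative parser for the fully-populated form of that line (field names and the helper are my own; the short form without qid/nsid is not handled):

```python
import re

# Parse NOTICE command dumps like:
#   GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 ...
CMD_RE = re.compile(
    r"(?P<name>[A-Z ]+?) \((?P<opc>[0-9A-Fa-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>[0-9a-fA-F]+) "
    r"cdw10:(?P<cdw10>[0-9a-fA-F]{8}) cdw11:(?P<cdw11>[0-9a-fA-F]{8})"
)

def parse_admin_cmd(line):
    """Extract admin-command fields from one NOTICE line; numeric fields as ints."""
    m = CMD_RE.search(line)
    if not m:
        return None
    return {
        "name": m.group("name"),
        "opc": int(m.group("opc"), 16),
        "qid": int(m.group("qid")),
        "cid": int(m.group("cid")),
        "nsid": int(m.group("nsid"), 16),
        "cdw10": int(m.group("cdw10"), 16),
        "cdw11": int(m.group("cdw11"), 16),
    }

cmd = parse_admin_cmd(
    "GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff "
    "cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0"
)
```

Decoding `cdw10` per the NVMe spec (log page ID in the low byte, number of dwords above it) then tells you which log page each GET LOG PAGE capsule is fetching.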
00:29:53.884 [2024-11-18 08:03:46.855585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.855593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1203d80) 00:29:53.884 [2024-11-18 08:03:46.855605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.884 [2024-11-18 08:03:46.855618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.855626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1203d80) 00:29:53.884 [2024-11-18 08:03:46.855635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.884 [2024-11-18 08:03:46.855658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fc00, cid 5, qid 0 00:29:53.884 [2024-11-18 08:03:46.855669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fa80, cid 4, qid 0 00:29:53.884 [2024-11-18 08:03:46.855676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fd80, cid 6, qid 0 00:29:53.884 [2024-11-18 08:03:46.855684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126ff00, cid 7, qid 0 00:29:53.884 [2024-11-18 08:03:46.855885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.884 [2024-11-18 08:03:46.855900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.884 [2024-11-18 08:03:46.855907] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.855914] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1203d80): datao=0, datal=8192, cccid=5 00:29:53.884 [2024-11-18 08:03:46.855921] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126fc00) on tqpair(0x1203d80): expected_datao=0, payload_size=8192 00:29:53.884 [2024-11-18 08:03:46.855929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.855948] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.855957] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.855969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.884 [2024-11-18 08:03:46.855979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.884 [2024-11-18 08:03:46.855985] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.855992] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1203d80): datao=0, datal=512, cccid=4 00:29:53.884 [2024-11-18 08:03:46.855999] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126fa80) on tqpair(0x1203d80): expected_datao=0, payload_size=512 00:29:53.884 [2024-11-18 08:03:46.856006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856015] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856023] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.884 [2024-11-18 08:03:46.856040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.884 [2024-11-18 08:03:46.856046] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856053] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1203d80): datao=0, datal=512, cccid=6 00:29:53.884 [2024-11-18 08:03:46.856060] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x126fd80) on tqpair(0x1203d80): expected_datao=0, payload_size=512 00:29:53.884 [2024-11-18 08:03:46.856067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856076] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856083] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.884 [2024-11-18 08:03:46.856101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.884 [2024-11-18 08:03:46.856107] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856113] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1203d80): datao=0, datal=4096, cccid=7 00:29:53.884 [2024-11-18 08:03:46.856127] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126ff00) on tqpair(0x1203d80): expected_datao=0, payload_size=4096 00:29:53.884 [2024-11-18 08:03:46.856135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856145] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856152] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.884 [2024-11-18 08:03:46.856173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.884 [2024-11-18 08:03:46.856179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fc00) on tqpair=0x1203d80 00:29:53.884 [2024-11-18 08:03:46.856205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.884 [2024-11-18 08:03:46.856216] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.884 [2024-11-18 08:03:46.856223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fa80) on tqpair=0x1203d80 00:29:53.884 [2024-11-18 08:03:46.856259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.884 [2024-11-18 08:03:46.856270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.884 [2024-11-18 08:03:46.856276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fd80) on tqpair=0x1203d80 00:29:53.884 [2024-11-18 08:03:46.856293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.884 [2024-11-18 08:03:46.856317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.884 [2024-11-18 08:03:46.856324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.884 [2024-11-18 08:03:46.856330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126ff00) on tqpair=0x1203d80 00:29:53.884 ===================================================== 00:29:53.884 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.884 ===================================================== 00:29:53.884 Controller Capabilities/Features 00:29:53.884 ================================ 00:29:53.884 Vendor ID: 8086 00:29:53.884 Subsystem Vendor ID: 8086 00:29:53.884 Serial Number: SPDK00000000000001 00:29:53.884 Model Number: SPDK bdev Controller 00:29:53.884 Firmware Version: 25.01 00:29:53.884 Recommended Arb Burst: 6 00:29:53.884 IEEE OUI Identifier: e4 d2 5c 00:29:53.884 Multi-path I/O 00:29:53.884 May have multiple subsystem ports: Yes 00:29:53.884 May have multiple controllers: Yes 00:29:53.884 Associated with SR-IOV VF: No 
00:29:53.884 Max Data Transfer Size: 131072 00:29:53.884 Max Number of Namespaces: 32 00:29:53.884 Max Number of I/O Queues: 127 00:29:53.884 NVMe Specification Version (VS): 1.3 00:29:53.884 NVMe Specification Version (Identify): 1.3 00:29:53.884 Maximum Queue Entries: 128 00:29:53.884 Contiguous Queues Required: Yes 00:29:53.884 Arbitration Mechanisms Supported 00:29:53.884 Weighted Round Robin: Not Supported 00:29:53.884 Vendor Specific: Not Supported 00:29:53.884 Reset Timeout: 15000 ms 00:29:53.884 Doorbell Stride: 4 bytes 00:29:53.884 NVM Subsystem Reset: Not Supported 00:29:53.884 Command Sets Supported 00:29:53.884 NVM Command Set: Supported 00:29:53.884 Boot Partition: Not Supported 00:29:53.884 Memory Page Size Minimum: 4096 bytes 00:29:53.884 Memory Page Size Maximum: 4096 bytes 00:29:53.884 Persistent Memory Region: Not Supported 00:29:53.884 Optional Asynchronous Events Supported 00:29:53.884 Namespace Attribute Notices: Supported 00:29:53.884 Firmware Activation Notices: Not Supported 00:29:53.884 ANA Change Notices: Not Supported 00:29:53.884 PLE Aggregate Log Change Notices: Not Supported 00:29:53.884 LBA Status Info Alert Notices: Not Supported 00:29:53.884 EGE Aggregate Log Change Notices: Not Supported 00:29:53.884 Normal NVM Subsystem Shutdown event: Not Supported 00:29:53.884 Zone Descriptor Change Notices: Not Supported 00:29:53.884 Discovery Log Change Notices: Not Supported 00:29:53.884 Controller Attributes 00:29:53.884 128-bit Host Identifier: Supported 00:29:53.884 Non-Operational Permissive Mode: Not Supported 00:29:53.884 NVM Sets: Not Supported 00:29:53.884 Read Recovery Levels: Not Supported 00:29:53.884 Endurance Groups: Not Supported 00:29:53.884 Predictable Latency Mode: Not Supported 00:29:53.884 Traffic Based Keep ALive: Not Supported 00:29:53.884 Namespace Granularity: Not Supported 00:29:53.884 SQ Associations: Not Supported 00:29:53.884 UUID List: Not Supported 00:29:53.884 Multi-Domain Subsystem: Not Supported 00:29:53.884 
Fixed Capacity Management: Not Supported 00:29:53.884 Variable Capacity Management: Not Supported 00:29:53.884 Delete Endurance Group: Not Supported 00:29:53.884 Delete NVM Set: Not Supported 00:29:53.884 Extended LBA Formats Supported: Not Supported 00:29:53.884 Flexible Data Placement Supported: Not Supported 00:29:53.884 00:29:53.884 Controller Memory Buffer Support 00:29:53.884 ================================ 00:29:53.884 Supported: No 00:29:53.884 00:29:53.884 Persistent Memory Region Support 00:29:53.884 ================================ 00:29:53.884 Supported: No 00:29:53.884 00:29:53.884 Admin Command Set Attributes 00:29:53.884 ============================ 00:29:53.884 Security Send/Receive: Not Supported 00:29:53.884 Format NVM: Not Supported 00:29:53.884 Firmware Activate/Download: Not Supported 00:29:53.884 Namespace Management: Not Supported 00:29:53.884 Device Self-Test: Not Supported 00:29:53.884 Directives: Not Supported 00:29:53.884 NVMe-MI: Not Supported 00:29:53.884 Virtualization Management: Not Supported 00:29:53.884 Doorbell Buffer Config: Not Supported 00:29:53.884 Get LBA Status Capability: Not Supported 00:29:53.884 Command & Feature Lockdown Capability: Not Supported 00:29:53.884 Abort Command Limit: 4 00:29:53.884 Async Event Request Limit: 4 00:29:53.885 Number of Firmware Slots: N/A 00:29:53.885 Firmware Slot 1 Read-Only: N/A 00:29:53.885 Firmware Activation Without Reset: N/A 00:29:53.885 Multiple Update Detection Support: N/A 00:29:53.885 Firmware Update Granularity: No Information Provided 00:29:53.885 Per-Namespace SMART Log: No 00:29:53.885 Asymmetric Namespace Access Log Page: Not Supported 00:29:53.885 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:53.885 Command Effects Log Page: Supported 00:29:53.885 Get Log Page Extended Data: Supported 00:29:53.885 Telemetry Log Pages: Not Supported 00:29:53.885 Persistent Event Log Pages: Not Supported 00:29:53.885 Supported Log Pages Log Page: May Support 00:29:53.885 Commands Supported & 
Effects Log Page: Not Supported 00:29:53.885 Feature Identifiers & Effects Log Page:May Support 00:29:53.885 NVMe-MI Commands & Effects Log Page: May Support 00:29:53.885 Data Area 4 for Telemetry Log: Not Supported 00:29:53.885 Error Log Page Entries Supported: 128 00:29:53.885 Keep Alive: Supported 00:29:53.885 Keep Alive Granularity: 10000 ms 00:29:53.885 00:29:53.885 NVM Command Set Attributes 00:29:53.885 ========================== 00:29:53.885 Submission Queue Entry Size 00:29:53.885 Max: 64 00:29:53.885 Min: 64 00:29:53.885 Completion Queue Entry Size 00:29:53.885 Max: 16 00:29:53.885 Min: 16 00:29:53.885 Number of Namespaces: 32 00:29:53.885 Compare Command: Supported 00:29:53.885 Write Uncorrectable Command: Not Supported 00:29:53.885 Dataset Management Command: Supported 00:29:53.885 Write Zeroes Command: Supported 00:29:53.885 Set Features Save Field: Not Supported 00:29:53.885 Reservations: Supported 00:29:53.885 Timestamp: Not Supported 00:29:53.885 Copy: Supported 00:29:53.885 Volatile Write Cache: Present 00:29:53.885 Atomic Write Unit (Normal): 1 00:29:53.885 Atomic Write Unit (PFail): 1 00:29:53.885 Atomic Compare & Write Unit: 1 00:29:53.885 Fused Compare & Write: Supported 00:29:53.885 Scatter-Gather List 00:29:53.885 SGL Command Set: Supported 00:29:53.885 SGL Keyed: Supported 00:29:53.885 SGL Bit Bucket Descriptor: Not Supported 00:29:53.885 SGL Metadata Pointer: Not Supported 00:29:53.885 Oversized SGL: Not Supported 00:29:53.885 SGL Metadata Address: Not Supported 00:29:53.885 SGL Offset: Supported 00:29:53.885 Transport SGL Data Block: Not Supported 00:29:53.885 Replay Protected Memory Block: Not Supported 00:29:53.885 00:29:53.885 Firmware Slot Information 00:29:53.885 ========================= 00:29:53.885 Active slot: 1 00:29:53.885 Slot 1 Firmware Revision: 25.01 00:29:53.885 00:29:53.885 00:29:53.885 Commands Supported and Effects 00:29:53.885 ============================== 00:29:53.885 Admin Commands 00:29:53.885 -------------- 
00:29:53.885 Get Log Page (02h): Supported 00:29:53.885 Identify (06h): Supported 00:29:53.885 Abort (08h): Supported 00:29:53.885 Set Features (09h): Supported 00:29:53.885 Get Features (0Ah): Supported 00:29:53.885 Asynchronous Event Request (0Ch): Supported 00:29:53.885 Keep Alive (18h): Supported 00:29:53.885 I/O Commands 00:29:53.885 ------------ 00:29:53.885 Flush (00h): Supported LBA-Change 00:29:53.885 Write (01h): Supported LBA-Change 00:29:53.885 Read (02h): Supported 00:29:53.885 Compare (05h): Supported 00:29:53.885 Write Zeroes (08h): Supported LBA-Change 00:29:53.885 Dataset Management (09h): Supported LBA-Change 00:29:53.885 Copy (19h): Supported LBA-Change 00:29:53.885 00:29:53.885 Error Log 00:29:53.885 ========= 00:29:53.885 00:29:53.885 Arbitration 00:29:53.885 =========== 00:29:53.885 Arbitration Burst: 1 00:29:53.885 00:29:53.885 Power Management 00:29:53.885 ================ 00:29:53.885 Number of Power States: 1 00:29:53.885 Current Power State: Power State #0 00:29:53.885 Power State #0: 00:29:53.885 Max Power: 0.00 W 00:29:53.885 Non-Operational State: Operational 00:29:53.885 Entry Latency: Not Reported 00:29:53.885 Exit Latency: Not Reported 00:29:53.885 Relative Read Throughput: 0 00:29:53.885 Relative Read Latency: 0 00:29:53.885 Relative Write Throughput: 0 00:29:53.885 Relative Write Latency: 0 00:29:53.885 Idle Power: Not Reported 00:29:53.885 Active Power: Not Reported 00:29:53.885 Non-Operational Permissive Mode: Not Supported 00:29:53.885 00:29:53.885 Health Information 00:29:53.885 ================== 00:29:53.885 Critical Warnings: 00:29:53.885 Available Spare Space: OK 00:29:53.885 Temperature: OK 00:29:53.885 Device Reliability: OK 00:29:53.885 Read Only: No 00:29:53.885 Volatile Memory Backup: OK 00:29:53.885 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:53.885 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:53.885 Available Spare: 0% 00:29:53.885 Available Spare Threshold: 0% 00:29:53.885 Life Percentage 
Used:[2024-11-18 08:03:46.856455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.885 [2024-11-18 08:03:46.856468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1203d80) 00:29:53.885 [2024-11-18 08:03:46.856502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.885 [2024-11-18 08:03:46.856527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126ff00, cid 7, qid 0 00:29:53.885 [2024-11-18 08:03:46.856650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.885 [2024-11-18 08:03:46.856663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.885 [2024-11-18 08:03:46.856670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.885 [2024-11-18 08:03:46.856677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126ff00) on tqpair=0x1203d80 00:29:53.885 [2024-11-18 08:03:46.856720] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:53.885 [2024-11-18 08:03:46.856740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f480) on tqpair=0x1203d80 00:29:53.885 [2024-11-18 08:03:46.856751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.885 [2024-11-18 08:03:46.856760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f600) on tqpair=0x1203d80 00:29:53.885 [2024-11-18 08:03:46.856768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.885 [2024-11-18 08:03:46.856776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f780) on tqpair=0x1203d80 00:29:53.885 [2024-11-18 08:03:46.856784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.885 [2024-11-18 08:03:46.856798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f900) on tqpair=0x1203d80 00:29:53.885 [2024-11-18 08:03:46.856807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.885 [2024-11-18 08:03:46.856819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.885 [2024-11-18 08:03:46.856828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.885 [2024-11-18 08:03:46.856834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1203d80) 00:29:53.885 [2024-11-18 08:03:46.856845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.885 [2024-11-18 08:03:46.856867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f900, cid 3, qid 0 00:29:53.885 [2024-11-18 08:03:46.856980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.885 [2024-11-18 08:03:46.856995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.885 [2024-11-18 08:03:46.857002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.885 [2024-11-18 08:03:46.857008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f900) on tqpair=0x1203d80 00:29:53.885 [2024-11-18 08:03:46.857019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.885 [2024-11-18 08:03:46.857027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.885 [2024-11-18 08:03:46.857033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1203d80) 00:29:53.885 [2024-11-18 08:03:46.857043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.886 [2024-11-18 08:03:46.857069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f900, cid 3, qid 0 00:29:53.886 [2024-11-18 08:03:46.857156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.886 [2024-11-18 08:03:46.857168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.886 [2024-11-18 08:03:46.857175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.886 [2024-11-18 08:03:46.857181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f900) on tqpair=0x1203d80 00:29:53.886 [2024-11-18 08:03:46.857190] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:53.886 [2024-11-18 08:03:46.857197] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:53.886 [2024-11-18 08:03:46.857213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.886 [2024-11-18 08:03:46.857222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.886 [2024-11-18 08:03:46.857228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1203d80) 00:29:53.886 [2024-11-18 08:03:46.857238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.886 [2024-11-18 08:03:46.857259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f900, cid 3, qid 0 00:29:53.886 [2024-11-18 08:03:46.857341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.886 [2024-11-18 08:03:46.857353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.886 [2024-11-18 08:03:46.857360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.886 [2024-11-18 08:03:46.857367] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f900) on tqpair=0x1203d80 00:29:53.888 [2024-11-18 08:03:46.865565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.888 [2024-11-18 08:03:46.865576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.888 [2024-11-18 08:03:46.865587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1203d80) 00:29:53.888 [2024-11-18 08:03:46.865599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.888 [2024-11-18 08:03:46.865622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126f900, cid 3, qid 0 00:29:53.888 [2024-11-18 08:03:46.865753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.888 [2024-11-18 08:03:46.865765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.888 [2024-11-18 08:03:46.865772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.888 [2024-11-18 08:03:46.865778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126f900) on tqpair=0x1203d80 00:29:53.888 [2024-11-18 08:03:46.865791] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 8 milliseconds 00:29:53.888 0% 00:29:53.888 Data Units Read: 0 00:29:53.888 Data Units Written: 0 00:29:53.888 Host Read Commands: 0 00:29:53.888 Host Write Commands: 0 00:29:53.888 Controller Busy Time: 0 minutes 00:29:53.888 Power Cycles: 0 00:29:53.888 Power On Hours: 0 hours 00:29:53.888 Unsafe Shutdowns: 0 00:29:53.888 Unrecoverable Media Errors: 0 00:29:53.888 Lifetime Error Log Entries: 0 00:29:53.888 Warning Temperature Time: 0 minutes 00:29:53.888 Critical Temperature Time: 0 minutes 00:29:53.888 00:29:53.888 Number of Queues 00:29:53.888 ================ 00:29:53.888 Number of I/O Submission Queues: 127 00:29:53.888 
Number of I/O Completion Queues: 127 00:29:53.888 00:29:53.888 Active Namespaces 00:29:53.888 ================= 00:29:53.888 Namespace ID:1 00:29:53.888 Error Recovery Timeout: Unlimited 00:29:53.888 Command Set Identifier: NVM (00h) 00:29:53.888 Deallocate: Supported 00:29:53.888 Deallocated/Unwritten Error: Not Supported 00:29:53.888 Deallocated Read Value: Unknown 00:29:53.888 Deallocate in Write Zeroes: Not Supported 00:29:53.888 Deallocated Guard Field: 0xFFFF 00:29:53.888 Flush: Supported 00:29:53.888 Reservation: Supported 00:29:53.888 Namespace Sharing Capabilities: Multiple Controllers 00:29:53.888 Size (in LBAs): 131072 (0GiB) 00:29:53.888 Capacity (in LBAs): 131072 (0GiB) 00:29:53.888 Utilization (in LBAs): 131072 (0GiB) 00:29:53.888 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:53.888 EUI64: ABCDEF0123456789 00:29:53.888 UUID: f730b0b7-609d-4321-b738-15b0076e0b6d 00:29:53.888 Thin Provisioning: Not Supported 00:29:53.888 Per-NS Atomic Units: Yes 00:29:53.888 Atomic Boundary Size (Normal): 0 00:29:53.888 Atomic Boundary Size (PFail): 0 00:29:53.888 Atomic Boundary Offset: 0 00:29:53.888 Maximum Single Source Range Length: 65535 00:29:53.888 Maximum Copy Length: 65535 00:29:53.888 Maximum Source Range Count: 1 00:29:53.888 NGUID/EUI64 Never Reused: No 00:29:53.888 Namespace Write Protected: No 00:29:53.888 Number of LBA Formats: 1 00:29:53.888 Current LBA Format: LBA Format #00 00:29:53.888 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:53.888 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:53.888 rmmod nvme_tcp 00:29:53.888 rmmod nvme_fabrics 00:29:53.888 rmmod nvme_keyring 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 832376 ']' 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 832376 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 832376 ']' 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 832376 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.888 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 832376 00:29:54.147 08:03:46 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.147 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.147 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 832376' 00:29:54.147 killing process with pid 832376 00:29:54.147 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 832376 00:29:54.147 08:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 832376 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.147 08:03:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.687 08:03:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:56.687 
00:29:56.687 real 0m5.660s
00:29:56.687 user 0m4.822s
00:29:56.687 sys 0m2.011s
00:29:56.687 08:03:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:56.687 08:03:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:56.687 ************************************
00:29:56.687 END TEST nvmf_identify
00:29:56.687 ************************************
00:29:56.687 08:03:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:29:56.687 08:03:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:56.687 08:03:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:56.687 08:03:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:56.687 ************************************
00:29:56.687 START TEST nvmf_perf
00:29:56.687 ************************************
00:29:56.687 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:29:56.687 * Looking for test storage...
00:29:56.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:56.687 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:56.687 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:56.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.688 --rc genhtml_branch_coverage=1 00:29:56.688 --rc genhtml_function_coverage=1 00:29:56.688 --rc genhtml_legend=1 00:29:56.688 --rc geninfo_all_blocks=1 00:29:56.688 --rc geninfo_unexecuted_blocks=1 00:29:56.688 00:29:56.688 ' 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:56.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:56.688 --rc genhtml_branch_coverage=1 00:29:56.688 --rc genhtml_function_coverage=1 00:29:56.688 --rc genhtml_legend=1 00:29:56.688 --rc geninfo_all_blocks=1 00:29:56.688 --rc geninfo_unexecuted_blocks=1 00:29:56.688 00:29:56.688 ' 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:56.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.688 --rc genhtml_branch_coverage=1 00:29:56.688 --rc genhtml_function_coverage=1 00:29:56.688 --rc genhtml_legend=1 00:29:56.688 --rc geninfo_all_blocks=1 00:29:56.688 --rc geninfo_unexecuted_blocks=1 00:29:56.688 00:29:56.688 ' 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:56.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.688 --rc genhtml_branch_coverage=1 00:29:56.688 --rc genhtml_function_coverage=1 00:29:56.688 --rc genhtml_legend=1 00:29:56.688 --rc geninfo_all_blocks=1 00:29:56.688 --rc geninfo_unexecuted_blocks=1 00:29:56.688 00:29:56.688 ' 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:56.688 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:56.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:56.689 08:03:49 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:56.689 08:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.590 08:03:51 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.590 
08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:58.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:58.590 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:58.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.590 08:03:51 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:58.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:58.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:58.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms
00:29:58.590
00:29:58.590 --- 10.0.0.2 ping statistics ---
00:29:58.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:58.590 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms
00:29:58.590 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:58.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:58.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms
00:29:58.590
00:29:58.590 --- 10.0.0.1 ping statistics ---
00:29:58.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:58.590 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter
start_nvmf_tgt 00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=834407 00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:58.848 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 834407 00:29:58.849 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 834407 ']' 00:29:58.849 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.849 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.849 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.849 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.849 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:58.849 [2024-11-18 08:03:51.756300] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:58.849 [2024-11-18 08:03:51.756387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:58.849 [2024-11-18 08:03:51.830597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:58.849 [2024-11-18 08:03:51.876785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:58.849 [2024-11-18 08:03:51.876859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:58.849 [2024-11-18 08:03:51.876873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:58.849 [2024-11-18 08:03:51.876904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:58.849 [2024-11-18 08:03:51.876915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:58.849 [2024-11-18 08:03:51.878446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:58.849 [2024-11-18 08:03:51.878527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:58.849 [2024-11-18 08:03:51.878593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:58.849 [2024-11-18 08:03:51.878595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:59.107 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:59.107 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:29:59.107 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:59.107 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:59.107 08:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:29:59.107 08:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:59.107 08:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:29:59.107 08:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:30:02.392 08:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:30:02.392 08:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:30:02.392 08:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0
00:30:02.392 08:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:02.960 08:03:55
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:30:02.960 08:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']'
00:30:02.960 08:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:30:02.960 08:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:30:02.960 08:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:30:02.960 [2024-11-18 08:03:56.014914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:02.960 08:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:03.528 08:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:30:03.528 08:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:03.786 08:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:30:03.786 08:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:30:04.044 08:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:04.302 [2024-11-18 08:03:57.223271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:04.302 08:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s
4420 00:30:04.560 08:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:04.560 08:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:04.560 08:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:04.560 08:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:05.938 Initializing NVMe Controllers 00:30:05.938 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:05.938 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:05.938 Initialization complete. Launching workers. 00:30:05.938 ======================================================== 00:30:05.938 Latency(us) 00:30:05.938 Device Information : IOPS MiB/s Average min max 00:30:05.938 PCIE (0000:88:00.0) NSID 1 from core 0: 85843.07 335.32 372.19 27.53 6304.65 00:30:05.938 ======================================================== 00:30:05.938 Total : 85843.07 335.32 372.19 27.53 6304.65 00:30:05.938 00:30:05.938 08:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:07.314 Initializing NVMe Controllers 00:30:07.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:07.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:07.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:07.314 Initialization complete. Launching workers. 
00:30:07.314 ======================================================== 00:30:07.314 Latency(us) 00:30:07.314 Device Information : IOPS MiB/s Average min max 00:30:07.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.70 0.34 11901.35 136.92 45938.63 00:30:07.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.83 0.19 21131.59 7940.14 50875.37 00:30:07.314 ======================================================== 00:30:07.314 Total : 135.52 0.53 15226.95 136.92 50875.37 00:30:07.314 00:30:07.314 08:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:08.694 Initializing NVMe Controllers 00:30:08.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:08.694 Initialization complete. Launching workers. 
00:30:08.694 ======================================================== 00:30:08.694 Latency(us) 00:30:08.694 Device Information : IOPS MiB/s Average min max 00:30:08.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8503.96 33.22 3763.58 621.35 7513.90 00:30:08.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3863.98 15.09 8326.18 6917.48 16115.66 00:30:08.694 ======================================================== 00:30:08.694 Total : 12367.94 48.31 5189.02 621.35 16115.66 00:30:08.694 00:30:08.694 08:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:08.694 08:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:08.694 08:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.242 Initializing NVMe Controllers 00:30:11.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.242 Controller IO queue size 128, less than required. 00:30:11.242 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.242 Controller IO queue size 128, less than required. 00:30:11.242 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:11.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:11.242 Initialization complete. Launching workers. 
00:30:11.242 ======================================================== 00:30:11.242 Latency(us) 00:30:11.242 Device Information : IOPS MiB/s Average min max 00:30:11.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1702.04 425.51 75956.88 50446.48 116883.40 00:30:11.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.34 144.33 230512.43 80557.39 376477.19 00:30:11.242 ======================================================== 00:30:11.242 Total : 2279.38 569.85 115103.73 50446.48 376477.19 00:30:11.242 00:30:11.242 08:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:11.527 No valid NVMe controllers or AIO or URING devices found 00:30:11.527 Initializing NVMe Controllers 00:30:11.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.527 Controller IO queue size 128, less than required. 00:30:11.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.527 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:11.527 Controller IO queue size 128, less than required. 00:30:11.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.527 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:11.527 WARNING: Some requested NVMe devices were skipped 00:30:11.527 08:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:14.063 Initializing NVMe Controllers 00:30:14.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.063 Controller IO queue size 128, less than required. 00:30:14.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:14.063 Controller IO queue size 128, less than required. 00:30:14.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:14.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:14.063 Initialization complete. Launching workers. 
00:30:14.063 00:30:14.063 ==================== 00:30:14.063 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:14.064 TCP transport: 00:30:14.064 polls: 8780 00:30:14.064 idle_polls: 5529 00:30:14.064 sock_completions: 3251 00:30:14.064 nvme_completions: 6183 00:30:14.064 submitted_requests: 9316 00:30:14.064 queued_requests: 1 00:30:14.064 00:30:14.064 ==================== 00:30:14.064 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:14.064 TCP transport: 00:30:14.064 polls: 12112 00:30:14.064 idle_polls: 8859 00:30:14.064 sock_completions: 3253 00:30:14.064 nvme_completions: 6409 00:30:14.064 submitted_requests: 9710 00:30:14.064 queued_requests: 1 00:30:14.064 ======================================================== 00:30:14.064 Latency(us) 00:30:14.064 Device Information : IOPS MiB/s Average min max 00:30:14.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1543.90 385.97 84538.47 57775.89 133494.47 00:30:14.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1600.34 400.09 80443.79 46845.51 128791.39 00:30:14.064 ======================================================== 00:30:14.064 Total : 3144.24 786.06 82454.38 46845.51 133494.47 00:30:14.064 00:30:14.064 08:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:14.064 08:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.322 08:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:14.322 08:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:14.322 08:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:18.516 08:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=aa020a3b-c30e-42f8-8a4c-15d6db596198 00:30:18.516 08:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb aa020a3b-c30e-42f8-8a4c-15d6db596198 00:30:18.516 08:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=aa020a3b-c30e-42f8-8a4c-15d6db596198 00:30:18.516 08:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:18.516 08:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:18.516 08:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:18.516 08:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:18.516 { 00:30:18.516 "uuid": "aa020a3b-c30e-42f8-8a4c-15d6db596198", 00:30:18.516 "name": "lvs_0", 00:30:18.516 "base_bdev": "Nvme0n1", 00:30:18.516 "total_data_clusters": 238234, 00:30:18.516 "free_clusters": 238234, 00:30:18.516 "block_size": 512, 00:30:18.516 "cluster_size": 4194304 00:30:18.516 } 00:30:18.516 ]' 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="aa020a3b-c30e-42f8-8a4c-15d6db596198") .free_clusters' 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="aa020a3b-c30e-42f8-8a4c-15d6db596198") .cluster_size' 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:18.516 952936 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa020a3b-c30e-42f8-8a4c-15d6db596198 lbd_0 20480 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=1ef93877-e3af-4324-bc52-fdce4d1609ab 00:30:18.516 08:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1ef93877-e3af-4324-bc52-fdce4d1609ab lvs_n_0 00:30:19.453 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=52553c3f-38f3-468e-8222-dc13a0439153 00:30:19.453 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 52553c3f-38f3-468e-8222-dc13a0439153 00:30:19.453 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=52553c3f-38f3-468e-8222-dc13a0439153 00:30:19.453 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:19.453 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:19.453 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:19.453 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:19.710 { 00:30:19.710 "uuid": "aa020a3b-c30e-42f8-8a4c-15d6db596198", 00:30:19.710 "name": "lvs_0", 00:30:19.710 "base_bdev": "Nvme0n1", 00:30:19.710 "total_data_clusters": 238234, 00:30:19.710 "free_clusters": 233114, 00:30:19.710 "block_size": 512, 00:30:19.710 
"cluster_size": 4194304 00:30:19.710 }, 00:30:19.710 { 00:30:19.710 "uuid": "52553c3f-38f3-468e-8222-dc13a0439153", 00:30:19.710 "name": "lvs_n_0", 00:30:19.710 "base_bdev": "1ef93877-e3af-4324-bc52-fdce4d1609ab", 00:30:19.710 "total_data_clusters": 5114, 00:30:19.710 "free_clusters": 5114, 00:30:19.710 "block_size": 512, 00:30:19.710 "cluster_size": 4194304 00:30:19.710 } 00:30:19.710 ]' 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="52553c3f-38f3-468e-8222-dc13a0439153") .free_clusters' 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="52553c3f-38f3-468e-8222-dc13a0439153") .cluster_size' 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:19.710 20456 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:19.710 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 52553c3f-38f3-468e-8222-dc13a0439153 lbd_nest_0 20456 00:30:19.969 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=adc55baa-de43-406a-8322-bf5bf0adc304 00:30:19.969 08:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.227 08:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:20.227 08:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 adc55baa-de43-406a-8322-bf5bf0adc304 00:30:20.485 08:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.743 08:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:20.743 08:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:20.743 08:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:20.743 08:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:20.743 08:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.950 Initializing NVMe Controllers 00:30:32.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:32.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:32.950 Initialization complete. Launching workers. 
00:30:32.950 ======================================================== 00:30:32.950 Latency(us) 00:30:32.950 Device Information : IOPS MiB/s Average min max 00:30:32.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.19 0.02 22188.67 165.29 45808.56 00:30:32.950 ======================================================== 00:30:32.950 Total : 45.19 0.02 22188.67 165.29 45808.56 00:30:32.950 00:30:32.950 08:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:32.950 08:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:42.923 Initializing NVMe Controllers 00:30:42.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:42.923 Initialization complete. Launching workers. 
00:30:42.923 ======================================================== 00:30:42.923 Latency(us) 00:30:42.923 Device Information : IOPS MiB/s Average min max 00:30:42.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.90 9.24 13548.09 6339.19 47901.21 00:30:42.923 ======================================================== 00:30:42.923 Total : 73.90 9.24 13548.09 6339.19 47901.21 00:30:42.923 00:30:42.923 08:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:42.923 08:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:42.924 08:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:52.885 Initializing NVMe Controllers 00:30:52.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:52.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:52.885 Initialization complete. Launching workers. 
00:30:52.885 ======================================================== 00:30:52.885 Latency(us) 00:30:52.885 Device Information : IOPS MiB/s Average min max 00:30:52.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7600.00 3.71 4220.39 446.16 44163.53 00:30:52.885 ======================================================== 00:30:52.885 Total : 7600.00 3.71 4220.39 446.16 44163.53 00:30:52.885 00:30:52.885 08:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:52.885 08:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:02.861 Initializing NVMe Controllers 00:31:02.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:02.861 Initialization complete. Launching workers. 
00:31:02.861 ======================================================== 00:31:02.861 Latency(us) 00:31:02.861 Device Information : IOPS MiB/s Average min max 00:31:02.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3935.49 491.94 8130.66 688.19 16753.07 00:31:02.861 ======================================================== 00:31:02.861 Total : 3935.49 491.94 8130.66 688.19 16753.07 00:31:02.861 00:31:02.861 08:04:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:02.861 08:04:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:02.861 08:04:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:12.843 Initializing NVMe Controllers 00:31:12.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:12.843 Controller IO queue size 128, less than required. 00:31:12.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:12.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:12.843 Initialization complete. Launching workers. 
00:31:12.843 ======================================================== 00:31:12.843 Latency(us) 00:31:12.843 Device Information : IOPS MiB/s Average min max 00:31:12.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11807.70 5.77 10843.13 1862.73 51352.81 00:31:12.843 ======================================================== 00:31:12.843 Total : 11807.70 5.77 10843.13 1862.73 51352.81 00:31:12.843 00:31:12.843 08:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:12.843 08:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:25.050 Initializing NVMe Controllers 00:31:25.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.050 Controller IO queue size 128, less than required. 00:31:25.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:25.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:25.050 Initialization complete. Launching workers. 
00:31:25.050 ======================================================== 00:31:25.050 Latency(us) 00:31:25.050 Device Information : IOPS MiB/s Average min max 00:31:25.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1199.50 149.94 107563.16 23630.27 222843.28 00:31:25.050 ======================================================== 00:31:25.050 Total : 1199.50 149.94 107563.16 23630.27 222843.28 00:31:25.050 00:31:25.050 08:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:25.050 08:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete adc55baa-de43-406a-8322-bf5bf0adc304 00:31:25.050 08:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:25.050 08:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1ef93877-e3af-4324-bc52-fdce4d1609ab 00:31:25.050 08:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.309 rmmod nvme_tcp 00:31:25.309 rmmod nvme_fabrics 00:31:25.309 rmmod nvme_keyring 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 834407 ']' 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 834407 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 834407 ']' 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 834407 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 834407 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 834407' 00:31:25.309 killing process with pid 834407 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 834407 00:31:25.309 08:05:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 834407 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.221 08:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.151 00:31:29.151 real 1m32.511s 00:31:29.151 user 5m43.809s 00:31:29.151 sys 0m15.345s 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:29.151 ************************************ 00:31:29.151 END TEST nvmf_perf 00:31:29.151 ************************************ 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:29.151 ************************************ 00:31:29.151 START TEST nvmf_fio_host 00:31:29.151 ************************************ 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:29.151 * Looking for test storage... 00:31:29.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:29.151 08:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:31:29.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.151 --rc genhtml_branch_coverage=1 00:31:29.151 --rc genhtml_function_coverage=1 00:31:29.151 --rc genhtml_legend=1 00:31:29.151 --rc geninfo_all_blocks=1 00:31:29.151 --rc geninfo_unexecuted_blocks=1 00:31:29.151 00:31:29.151 ' 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:29.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.151 --rc genhtml_branch_coverage=1 00:31:29.151 --rc genhtml_function_coverage=1 00:31:29.151 --rc genhtml_legend=1 00:31:29.151 --rc geninfo_all_blocks=1 00:31:29.151 --rc geninfo_unexecuted_blocks=1 00:31:29.151 00:31:29.151 ' 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:29.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.151 --rc genhtml_branch_coverage=1 00:31:29.151 --rc genhtml_function_coverage=1 00:31:29.151 --rc genhtml_legend=1 00:31:29.151 --rc geninfo_all_blocks=1 00:31:29.151 --rc geninfo_unexecuted_blocks=1 00:31:29.151 00:31:29.151 ' 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:29.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.151 --rc genhtml_branch_coverage=1 00:31:29.151 --rc genhtml_function_coverage=1 00:31:29.151 --rc genhtml_legend=1 00:31:29.151 --rc geninfo_all_blocks=1 00:31:29.151 --rc geninfo_unexecuted_blocks=1 00:31:29.151 00:31:29.151 ' 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.151 08:05:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.151 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:29.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:29.152 08:05:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.152 08:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:31:31.078 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:31.078 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.078 08:05:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:31.078 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:31.078 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.078 08:05:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:31.078 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:31.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:31:31.337 00:31:31.337 --- 10.0.0.2 ping statistics --- 00:31:31.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.337 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:31.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:31:31.337 00:31:31.337 --- 10.0.0.1 ping statistics --- 00:31:31.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.337 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=846585 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 846585 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 846585 ']' 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.337 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.337 [2024-11-18 08:05:24.284176] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:31:31.337 [2024-11-18 08:05:24.284255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.337 [2024-11-18 08:05:24.357502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.337 [2024-11-18 08:05:24.404346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.337 [2024-11-18 08:05:24.404398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:31.337 [2024-11-18 08:05:24.404420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.337 [2024-11-18 08:05:24.404431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.337 [2024-11-18 08:05:24.404440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.337 [2024-11-18 08:05:24.405974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.337 [2024-11-18 08:05:24.406030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:31.338 [2024-11-18 08:05:24.406095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.338 [2024-11-18 08:05:24.406099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.596 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.596 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:31.596 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:31.855 [2024-11-18 08:05:24.780708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.855 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:31.855 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:31.855 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.855 08:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:32.114 Malloc1 00:31:32.114 08:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:32.372 08:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:32.630 08:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.888 [2024-11-18 08:05:25.915256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.888 08:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:33.148 08:05:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:33.148 08:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:33.406 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:33.406 fio-3.35 00:31:33.406 Starting 1 thread 00:31:35.937 00:31:35.937 test: (groupid=0, jobs=1): err= 0: pid=846941: Mon Nov 18 08:05:28 2024 00:31:35.937 read: IOPS=9006, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2007msec) 00:31:35.937 slat (nsec): min=1919, max=175151, avg=2436.95, stdev=1921.28 00:31:35.937 clat (usec): min=2563, max=14183, avg=7743.28, stdev=646.44 00:31:35.937 lat (usec): min=2594, max=14186, avg=7745.72, stdev=646.31 00:31:35.937 clat percentiles (usec): 00:31:35.937 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7242], 00:31:35.937 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7898], 00:31:35.937 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:31:35.937 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11600], 99.95th=[12649], 00:31:35.937 | 99.99th=[14091] 00:31:35.937 bw ( KiB/s): min=34816, max=36520, per=99.94%, avg=36006.00, stdev=809.37, samples=4 00:31:35.937 iops : min= 8704, max= 9130, avg=9001.50, stdev=202.34, samples=4 00:31:35.937 write: IOPS=9024, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2007msec); 0 zone resets 00:31:35.937 slat (usec): min=2, max=130, avg= 2.56, stdev= 1.41 00:31:35.937 clat (usec): min=1428, max=12740, avg=6400.31, stdev=535.26 00:31:35.937 lat (usec): min=1436, max=12742, avg=6402.87, stdev=535.20 00:31:35.937 clat percentiles (usec): 00:31:35.937 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:31:35.937 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:31:35.937 | 
70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:31:35.937 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[10683], 99.95th=[11600], 00:31:35.937 | 99.99th=[12649] 00:31:35.937 bw ( KiB/s): min=35672, max=36480, per=100.00%, avg=36118.00, stdev=423.59, samples=4 00:31:35.937 iops : min= 8918, max= 9120, avg=9029.50, stdev=105.90, samples=4 00:31:35.937 lat (msec) : 2=0.03%, 4=0.12%, 10=99.71%, 20=0.15% 00:31:35.937 cpu : usr=64.71%, sys=33.70%, ctx=89, majf=0, minf=36 00:31:35.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.937 issued rwts: total=18076,18113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.937 00:31:35.937 Run status group 0 (all jobs): 00:31:35.937 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.0MB), run=2007-2007msec 00:31:35.937 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.2MB), run=2007-2007msec 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:31:35.937 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:35.938 08:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:36.196 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:36.196 fio-3.35 00:31:36.196 Starting 1 thread 00:31:38.727 00:31:38.728 test: (groupid=0, jobs=1): err= 0: pid=847335: Mon Nov 18 08:05:31 2024 00:31:38.728 read: IOPS=8239, BW=129MiB/s (135MB/s)(258MiB/2007msec) 00:31:38.728 slat (usec): min=2, max=103, avg= 3.80, stdev= 1.85 00:31:38.728 clat (usec): min=2259, max=17970, avg=9015.83, stdev=2183.64 00:31:38.728 lat (usec): min=2263, max=17974, avg=9019.63, stdev=2183.88 00:31:38.728 clat percentiles (usec): 00:31:38.728 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 7177], 00:31:38.728 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:31:38.728 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11731], 95.00th=[12518], 00:31:38.728 | 99.00th=[15533], 99.50th=[16057], 99.90th=[17171], 99.95th=[17171], 00:31:38.728 | 99.99th=[17171] 00:31:38.728 bw ( KiB/s): min=60192, max=77216, per=51.37%, avg=67720.00, stdev=7905.72, samples=4 00:31:38.728 iops : min= 3762, max= 4826, avg=4232.50, stdev=494.11, samples=4 00:31:38.728 write: IOPS=4818, BW=75.3MiB/s (78.9MB/s)(139MiB/1841msec); 0 zone resets 00:31:38.728 slat (usec): min=30, max=283, avg=34.01, stdev= 7.70 00:31:38.728 clat (usec): min=5425, max=21628, avg=11475.57, stdev=2068.68 00:31:38.728 lat (usec): min=5472, max=21689, avg=11509.58, stdev=2070.46 00:31:38.728 clat percentiles (usec): 00:31:38.728 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 
9765], 00:31:38.728 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:31:38.728 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14353], 95.00th=[15139], 00:31:38.728 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19006], 99.95th=[19792], 00:31:38.728 | 99.99th=[21627] 00:31:38.728 bw ( KiB/s): min=63360, max=80768, per=91.48%, avg=70528.00, stdev=8552.76, samples=4 00:31:38.728 iops : min= 3960, max= 5048, avg=4408.00, stdev=534.55, samples=4 00:31:38.728 lat (msec) : 4=0.15%, 10=52.55%, 20=47.29%, 50=0.01% 00:31:38.728 cpu : usr=77.87%, sys=20.84%, ctx=32, majf=0, minf=54 00:31:38.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:38.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.728 issued rwts: total=16536,8871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.728 00:31:38.728 Run status group 0 (all jobs): 00:31:38.728 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2007-2007msec 00:31:38.728 WRITE: bw=75.3MiB/s (78.9MB/s), 75.3MiB/s-75.3MiB/s (78.9MB/s-78.9MB/s), io=139MiB (145MB), run=1841-1841msec 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 
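The fio runs in this log all go through the `fio_plugin` helper traced above: it scans `ldd` output of the SPDK ioengine for sanitizer libraries, builds `LD_PRELOAD`, and execs fio with a `--filename` that encodes the NVMe/TCP connection. A dry-run sketch of that command assembly (the workspace path is taken from the log; the fio location and everything else are assumptions, and nothing here actually starts fio):

```shell
# Dry-run sketch of the fio_plugin invocation pattern seen in this log;
# it only assembles the command line, it does not run fio.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the log
PLUGIN=$SPDK_ROOT/build/fio/spdk_nvme                         # SPDK external fio ioengine
CONF=$SPDK_ROOT/app/fio/nvme/example_config.fio

# With the SPDK ioengine, --filename carries transport parameters instead
# of a block-device path:
FILENAME='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'

# The real helper greps `ldd $PLUGIN` for libasan/libclang_rt.asan and
# prepends any hit to LD_PRELOAD; this build has none, so asan_lib stays
# empty and only the plugin is preloaded.
CMD="LD_PRELOAD='$PLUGIN' /usr/src/fio/fio $CONF --filename='$FILENAME' --bs=4096"
echo "$CMD"
```

On a sanitizer build the `grep libasan` / `grep libclang_rt.asan` passes in the trace would each contribute a library path ahead of the plugin in `LD_PRELOAD`.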
00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:38.728 08:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:42.017 Nvme0n1 00:31:42.017 08:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:45.300 08:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=b524cd64-2be7-4a7f-a49d-189ec6cae97e 00:31:45.300 08:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb b524cd64-2be7-4a7f-a49d-189ec6cae97e 00:31:45.300 08:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=b524cd64-2be7-4a7f-a49d-189ec6cae97e 00:31:45.300 08:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:45.300 08:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:45.300 08:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:45.300 08:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:45.300 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:45.300 { 00:31:45.300 "uuid": "b524cd64-2be7-4a7f-a49d-189ec6cae97e", 00:31:45.300 "name": "lvs_0", 00:31:45.300 "base_bdev": "Nvme0n1", 00:31:45.300 "total_data_clusters": 930, 00:31:45.300 "free_clusters": 930, 00:31:45.300 "block_size": 512, 00:31:45.300 "cluster_size": 1073741824 00:31:45.300 } 00:31:45.300 ]' 00:31:45.300 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b524cd64-2be7-4a7f-a49d-189ec6cae97e") .free_clusters' 00:31:45.300 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:45.300 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b524cd64-2be7-4a7f-a49d-189ec6cae97e") .cluster_size' 00:31:45.300 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:45.300 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:45.300 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:45.300 952320 00:31:45.300 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:45.558 230571be-1f8d-40ff-9ad7-81d222f919a7 00:31:45.558 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:45.816 08:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:46.074 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:46.331 08:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:46.589 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:46.589 fio-3.35 00:31:46.589 Starting 1 thread 00:31:49.122 00:31:49.122 test: (groupid=0, jobs=1): err= 0: pid=848673: Mon Nov 18 08:05:42 2024 00:31:49.122 read: IOPS=6037, BW=23.6MiB/s (24.7MB/s)(47.4MiB/2009msec) 00:31:49.122 slat (nsec): min=1985, max=173361, avg=2621.91, stdev=2589.40 00:31:49.122 clat (usec): min=867, max=171307, avg=11596.97, stdev=11617.34 00:31:49.122 lat (usec): min=870, max=171352, avg=11599.60, stdev=11617.68 00:31:49.122 clat percentiles 
(msec): 00:31:49.122 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:49.122 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:31:49.122 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:31:49.122 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:49.122 | 99.99th=[ 171] 00:31:49.122 bw ( KiB/s): min=16776, max=26616, per=99.87%, avg=24118.00, stdev=4894.85, samples=4 00:31:49.122 iops : min= 4194, max= 6654, avg=6029.50, stdev=1223.71, samples=4 00:31:49.122 write: IOPS=6019, BW=23.5MiB/s (24.7MB/s)(47.2MiB/2009msec); 0 zone resets 00:31:49.122 slat (usec): min=2, max=148, avg= 2.76, stdev= 1.99 00:31:49.122 clat (usec): min=331, max=169369, avg=9459.26, stdev=10905.08 00:31:49.122 lat (usec): min=334, max=169376, avg=9462.02, stdev=10905.43 00:31:49.122 clat percentiles (msec): 00:31:49.122 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:49.122 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:31:49.122 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:31:49.122 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:31:49.122 | 99.99th=[ 169] 00:31:49.122 bw ( KiB/s): min=17768, max=26304, per=99.98%, avg=24074.00, stdev=4206.27, samples=4 00:31:49.122 iops : min= 4442, max= 6576, avg=6018.50, stdev=1051.57, samples=4 00:31:49.122 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:49.122 lat (msec) : 2=0.03%, 4=0.12%, 10=57.89%, 20=41.41%, 250=0.53% 00:31:49.122 cpu : usr=64.39%, sys=34.26%, ctx=87, majf=0, minf=36 00:31:49.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:49.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.122 issued rwts: total=12129,12094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.122 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:49.122 00:31:49.122 Run 
status group 0 (all jobs): 00:31:49.122 READ: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=47.4MiB (49.7MB), run=2009-2009msec 00:31:49.122 WRITE: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.2MiB (49.5MB), run=2009-2009msec 00:31:49.122 08:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:49.381 08:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=80a005b6-5ebd-4958-9630-b9bd2bab66e9 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 80a005b6-5ebd-4958-9630-b9bd2bab66e9 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=80a005b6-5ebd-4958-9630-b9bd2bab66e9 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:50.761 { 00:31:50.761 "uuid": "b524cd64-2be7-4a7f-a49d-189ec6cae97e", 00:31:50.761 "name": "lvs_0", 00:31:50.761 "base_bdev": "Nvme0n1", 00:31:50.761 "total_data_clusters": 930, 00:31:50.761 "free_clusters": 0, 00:31:50.761 "block_size": 512, 00:31:50.761 "cluster_size": 1073741824 00:31:50.761 }, 
00:31:50.761 { 00:31:50.761 "uuid": "80a005b6-5ebd-4958-9630-b9bd2bab66e9", 00:31:50.761 "name": "lvs_n_0", 00:31:50.761 "base_bdev": "230571be-1f8d-40ff-9ad7-81d222f919a7", 00:31:50.761 "total_data_clusters": 237847, 00:31:50.761 "free_clusters": 237847, 00:31:50.761 "block_size": 512, 00:31:50.761 "cluster_size": 4194304 00:31:50.761 } 00:31:50.761 ]' 00:31:50.761 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="80a005b6-5ebd-4958-9630-b9bd2bab66e9") .free_clusters' 00:31:51.019 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:51.019 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="80a005b6-5ebd-4958-9630-b9bd2bab66e9") .cluster_size' 00:31:51.019 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:51.019 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:51.019 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:51.019 951388 00:31:51.019 08:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:51.585 597d957b-93c0-401c-bbbf-4d849f74135e 00:31:51.585 08:05:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:51.844 08:05:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:52.102 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:52.360 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # asan_lib= 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:52.361 08:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.619 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:52.619 fio-3.35 00:31:52.619 Starting 1 thread 00:31:55.150 00:31:55.150 test: (groupid=0, jobs=1): err= 0: pid=849411: Mon Nov 18 08:05:48 2024 00:31:55.150 read: IOPS=5776, BW=22.6MiB/s (23.7MB/s)(45.3MiB/2009msec) 00:31:55.150 slat (nsec): min=1905, max=135337, avg=2427.72, stdev=1955.73 00:31:55.150 clat (usec): min=4560, max=20854, avg=12094.76, stdev=1138.59 00:31:55.150 lat (usec): min=4566, max=20856, avg=12097.19, stdev=1138.50 00:31:55.150 clat percentiles (usec): 00:31:55.150 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 
00:31:55.150 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:31:55.150 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:31:55.150 | 99.00th=[14615], 99.50th=[14877], 99.90th=[19530], 99.95th=[20579], 00:31:55.150 | 99.99th=[20841] 00:31:55.150 bw ( KiB/s): min=21952, max=23600, per=99.78%, avg=23056.00, stdev=752.85, samples=4 00:31:55.150 iops : min= 5488, max= 5900, avg=5764.00, stdev=188.21, samples=4 00:31:55.150 write: IOPS=5760, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec); 0 zone resets 00:31:55.150 slat (usec): min=2, max=117, avg= 2.55, stdev= 1.55 00:31:55.150 clat (usec): min=2163, max=18418, avg=9953.94, stdev=915.71 00:31:55.150 lat (usec): min=2169, max=18420, avg=9956.49, stdev=915.68 00:31:55.150 clat percentiles (usec): 00:31:55.150 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:55.150 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:31:55.150 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:31:55.150 | 99.00th=[11863], 99.50th=[12256], 99.90th=[16188], 99.95th=[17433], 00:31:55.150 | 99.99th=[18220] 00:31:55.150 bw ( KiB/s): min=22944, max=23168, per=100.00%, avg=23046.00, stdev=107.60, samples=4 00:31:55.150 iops : min= 5736, max= 5792, avg=5761.50, stdev=26.90, samples=4 00:31:55.150 lat (msec) : 4=0.05%, 10=27.22%, 20=72.69%, 50=0.05% 00:31:55.150 cpu : usr=64.14%, sys=34.56%, ctx=143, majf=0, minf=36 00:31:55.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:55.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.150 issued rwts: total=11605,11573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:55.150 00:31:55.150 Run status group 0 (all jobs): 00:31:55.150 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s 
(23.7MB/s-23.7MB/s), io=45.3MiB (47.5MB), run=2009-2009msec 00:31:55.150 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:31:55.150 08:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:55.409 08:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:55.409 08:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:59.607 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:59.607 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:02.897 08:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:02.897 08:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@124 -- # set +e 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.797 rmmod nvme_tcp 00:32:04.797 rmmod nvme_fabrics 00:32:04.797 rmmod nvme_keyring 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 846585 ']' 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 846585 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 846585 ']' 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 846585 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 846585 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 846585' 00:32:04.797 killing process with pid 846585 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 846585 00:32:04.797 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 846585 
00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.056 08:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.964 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.964 00:32:06.964 real 0m38.093s 00:32:06.964 user 2m27.333s 00:32:06.964 sys 0m6.700s 00:32:06.964 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.964 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.964 ************************************ 00:32:06.964 END TEST nvmf_fio_host 00:32:06.964 ************************************ 00:32:06.964 08:05:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:32:06.964 08:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:06.964 08:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.964 08:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.964 ************************************ 00:32:06.964 START TEST nvmf_failover 00:32:06.964 ************************************ 00:32:06.965 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:07.224 * Looking for test storage... 00:32:07.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 
00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:07.224 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 
00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:07.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.225 --rc genhtml_branch_coverage=1 00:32:07.225 --rc genhtml_function_coverage=1 00:32:07.225 --rc genhtml_legend=1 00:32:07.225 --rc geninfo_all_blocks=1 00:32:07.225 --rc geninfo_unexecuted_blocks=1 00:32:07.225 00:32:07.225 ' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:07.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.225 --rc genhtml_branch_coverage=1 00:32:07.225 --rc genhtml_function_coverage=1 00:32:07.225 --rc genhtml_legend=1 00:32:07.225 --rc geninfo_all_blocks=1 00:32:07.225 --rc geninfo_unexecuted_blocks=1 00:32:07.225 00:32:07.225 ' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:07.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.225 --rc genhtml_branch_coverage=1 00:32:07.225 --rc genhtml_function_coverage=1 00:32:07.225 --rc genhtml_legend=1 00:32:07.225 --rc geninfo_all_blocks=1 00:32:07.225 --rc geninfo_unexecuted_blocks=1 00:32:07.225 00:32:07.225 ' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:07.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.225 --rc genhtml_branch_coverage=1 00:32:07.225 --rc genhtml_function_coverage=1 00:32:07.225 --rc genhtml_legend=1 00:32:07.225 --rc geninfo_all_blocks=1 00:32:07.225 --rc geninfo_unexecuted_blocks=1 00:32:07.225 00:32:07.225 ' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:07.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:07.225 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.226 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.226 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:32:07.226 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:07.226 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:07.226 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:07.226 08:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.131 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:09.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:09.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.132 08:06:02 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:09.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:09.132 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.132 
08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.132 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:32:09.391 00:32:09.391 --- 10.0.0.2 ping statistics --- 00:32:09.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.391 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:32:09.391 00:32:09.391 --- 10.0.0.1 ping statistics --- 00:32:09.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.391 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=852892 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 852892 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 852892 ']' 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.391 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:09.391 [2024-11-18 08:06:02.418551] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:09.391 [2024-11-18 08:06:02.418641] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.695 [2024-11-18 08:06:02.494149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:09.695 [2024-11-18 08:06:02.541762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.695 [2024-11-18 08:06:02.541814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.695 [2024-11-18 08:06:02.541844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.695 [2024-11-18 08:06:02.541855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:09.695 [2024-11-18 08:06:02.541865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.695 [2024-11-18 08:06:02.543364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.695 [2024-11-18 08:06:02.543423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.695 [2024-11-18 08:06:02.543420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.695 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.695 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:09.695 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.695 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.695 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:09.696 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.696 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:09.976 [2024-11-18 08:06:02.940114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.976 08:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:10.234 Malloc0 00:32:10.234 08:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:10.801 08:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:11.059 08:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.318 [2024-11-18 08:06:04.189188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.318 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:11.576 [2024-11-18 08:06:04.466021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:11.576 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:11.835 [2024-11-18 08:06:04.742989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=853190 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 853190 /var/tmp/bdevperf.sock 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 853190 ']' 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:11.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.835 08:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:12.093 08:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.093 08:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:12.093 08:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:12.351 NVMe0n1 00:32:12.351 08:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:12.919 00:32:12.920 08:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=853325 00:32:12.920 08:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:12.920 08:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
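For reference, the target setup that the trace above walks through (TCP transport creation, a Malloc0 backing bdev, the cnode1 subsystem with its namespace, and the three listeners the failover test cycles between) can be condensed into a plain shell sketch. This is an editor's summary, not part of the test suite: the `rpc()` stub below only records the intended `rpc.py` invocations so the sequence can be inspected without a live SPDK target.

```shell
#!/usr/bin/env bash
# Editor's sketch of the RPC sequence traced above (host/failover.sh setup).
# rpc() records each command instead of invoking scripts/rpc.py, so this
# runs without a live nvmf_tgt; swap the stub for the real rpc.py to apply it.
NQN="nqn.2016-06.io.spdk:cnode1"
cmds=()
rpc() { cmds+=("rpc.py $*"); }

rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport init
rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do                        # three paths for failover
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done
printf '%s\n' "${cmds[@]}"
```

Against a running nvmf_tgt, `rpc()` would invoke `scripts/rpc.py` directly, exactly as the trace does; bdevperf then attaches to the subsystem with `-x failover` and the test removes and re-adds listeners to force path switches.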
00:32:13.854 08:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.112 [2024-11-18 08:06:07.107302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb800 is same with the state(6) to be set [previous message repeated 6 more times; duplicate lines elided] 00:32:14.112 08:06:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:17.398 08:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:17.656 00:32:17.657 08:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2
-s 4421 00:32:17.916 [2024-11-18 08:06:10.878705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec650 is same with the state(6) to be set [previous message repeated 14 more times; duplicate lines elided] 00:32:17.916 08:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:21.209 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.209 [2024-11-18 08:06:14.148249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.209 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:22.145 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:22.403 [2024-11-18 08:06:15.478696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ed570 is same with the state(6) to be set [previous message repeated many more times; duplicate lines elided] 00:32:22.664 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 853325 00:32:27.932 { 00:32:27.932 "results": [ 00:32:27.932 { 00:32:27.932 "job": "NVMe0n1", 00:32:27.932 "core_mask": "0x1", 00:32:27.932 "workload": "verify", 00:32:27.932 "status": "finished", 00:32:27.932 "verify_range": { 00:32:27.932 "start": 0, 00:32:27.932 "length": 16384 00:32:27.932 }, 00:32:27.932 "queue_depth": 128, 00:32:27.932 "io_size": 4096, 00:32:27.932 "runtime": 15.015316, 00:32:27.932 "iops": 8505.382104512486, 00:32:27.932 "mibps": 33.2241488457519, 00:32:27.932 "io_failed": 10181, 00:32:27.932 "io_timeout": 0, 00:32:27.932 "avg_latency_us": 13910.998947410266, 00:32:27.932 "min_latency_us": 552.2014814814814, 00:32:27.932 "max_latency_us": 18641.35111111111 00:32:27.932 } 00:32:27.932 ], 00:32:27.932 "core_count": 1 00:32:27.932 } 00:32:27.932 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 853190 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 853190 ']' 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 853190 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 853190 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 853190' 00:32:28.197 killing process with pid 853190 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 853190 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 853190 00:32:28.197 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:28.197 [2024-11-18 08:06:04.811347] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:28.197 [2024-11-18 08:06:04.811430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853190 ] 00:32:28.197 [2024-11-18 08:06:04.881304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.197 [2024-11-18 08:06:04.928713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.197 Running I/O for 15 seconds... 
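As a sanity check on the bdevperf summary block above: assuming the reported "mibps" is simply iops scaled by the 4096-byte io_size (an editor's assumption about the derivation, but the numbers agree), it can be recomputed from the JSON values:

```shell
#!/usr/bin/env bash
# Recompute bdevperf's reported throughput from its own JSON fields.
# Values copied verbatim from the results block in the log above.
iops=8505.382104512486
io_size=4096   # bytes per I/O (-o 4096 on the bdevperf command line)
# MiB/s = iops * bytes-per-I/O / 2^20
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.4f", i * s / 1048576 }')
echo "$mibps"  # agrees with the reported "mibps": 33.2241488457519
```

The 10181 entries under "io_failed" are the I/Os aborted while listeners were being removed, which is expected behavior for this failover scenario.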
00:32:28.197 8637.00 IOPS, 33.74 MiB/s [2024-11-18T07:06:21.285Z] [2024-11-18 08:06:07.107792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.198 [2024-11-18 08:06:07.107837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.107866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.198 [2024-11-18 08:06:07.107884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.107902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.107917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.107947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.107963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.107978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.107994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:28.198 [2024-11-18 08:06:07.108008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [analogous READ command / ABORTED - SQ DELETION completion pairs repeated for lba 79872 through lba 80040; duplicate entries elided] 00:32:28.198 [2024-11-18 08:06:07.108713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.108977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.108992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.109006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.109021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.198 [2024-11-18 08:06:07.109035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.198 [2024-11-18 08:06:07.109051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 
[2024-11-18 08:06:07.109065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 
[2024-11-18 08:06:07.109587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.199 [2024-11-18 08:06:07.109861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.109890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.109921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.109951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.109980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.109996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.110010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.110038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.110068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 
[2024-11-18 08:06:07.110097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.110131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.110161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.110190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.110226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.199 [2024-11-18 08:06:07.110256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.199 [2024-11-18 08:06:07.110271] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 
nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.110948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 
[2024-11-18 08:06:07.110977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.110991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.111006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.111020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.111035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.111049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.111064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.111078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.111093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.111107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.200 [2024-11-18 08:06:07.111122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.200 [2024-11-18 08:06:07.111136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.200 [2024-11-18 08:06:07.111166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.200 [2024-11-18 08:06:07.111201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.200 [2024-11-18 08:06:07.111230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.200 [2024-11-18 08:06:07.111260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.200 [2024-11-18 08:06:07.111293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.200 [2024-11-18 08:06:07.111323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.200 [2024-11-18 08:06:07.111352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.200 [2024-11-18 08:06:07.111387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.200 [2024-11-18 08:06:07.111417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.200 [2024-11-18 08:06:07.111446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.200 [2024-11-18 08:06:07.111461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.200 [2024-11-18 08:06:07.111474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.201 [2024-11-18 08:06:07.111513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:07.111542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:07.111571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:07.111600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:07.111628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:07.111662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:07.111699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:07.111728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc797b0 is same with the state(6) to be set
00:32:28.201 [2024-11-18 08:06:07.111762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:28.201 [2024-11-18 08:06:07.111774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:28.201 [2024-11-18 08:06:07.111785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0
00:32:28.201 [2024-11-18 08:06:07.111798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111879] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:28.201 [2024-11-18 08:06:07.111921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.201 [2024-11-18 08:06:07.111940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.201 [2024-11-18 08:06:07.111975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.111989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.201 [2024-11-18 08:06:07.112002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.112016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.201 [2024-11-18 08:06:07.112030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:07.112043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:28.201 [2024-11-18 08:06:07.115283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:28.201 [2024-11-18 08:06:07.115325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc58890 (9): Bad file descriptor
00:32:28.201 [2024-11-18 08:06:07.300308] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:32:28.201 7789.00 IOPS, 30.43 MiB/s [2024-11-18T07:06:21.289Z] 8055.67 IOPS, 31.47 MiB/s [2024-11-18T07:06:21.289Z] 8226.00 IOPS, 32.13 MiB/s [2024-11-18T07:06:21.289Z]
[2024-11-18 08:06:10.877158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.201 [2024-11-18 08:06:10.877250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.877270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.201 [2024-11-18 08:06:10.877295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.877310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.201 [2024-11-18 08:06:10.877323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.877337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.201 [2024-11-18 08:06:10.877350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.877363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58890 is same with the state(6) to be set
00:32:28.201 [2024-11-18 08:06:10.879329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.201 [2024-11-18 08:06:10.879914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.201 [2024-11-18 08:06:10.879927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.879942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.879956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.879970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.879984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.879998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.202 [2024-11-18 08:06:10.880591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.202 [2024-11-18 08:06:10.880894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.202 [2024-11-18 08:06:10.880908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.880922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.880936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.880951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.880964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.880979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.880993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.203 [2024-11-18 08:06:10.881844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.203 [2024-11-18 08:06:10.881859] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.203 [2024-11-18 08:06:10.881873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.881905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.203 [2024-11-18 08:06:10.881920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.881935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.203 [2024-11-18 08:06:10.881949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.881966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.203 [2024-11-18 08:06:10.881981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.881997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.203 [2024-11-18 08:06:10.882011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.882030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.203 [2024-11-18 08:06:10.882045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.882060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.203 [2024-11-18 08:06:10.882074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.882090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.203 [2024-11-18 08:06:10.882104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.882119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.203 [2024-11-18 08:06:10.882133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.203 [2024-11-18 08:06:10.882148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:28.204 [2024-11-18 08:06:10.882221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:28.204 [2024-11-18 08:06:10.882749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.204 [2024-11-18 08:06:10.882858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.882888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.882918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.882947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.882977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.882991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 
[2024-11-18 08:06:10.883275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.204 [2024-11-18 08:06:10.883305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:28.204 [2024-11-18 08:06:10.883352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:28.204 [2024-11-18 08:06:10.883365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125408 len:8 PRP1 0x0 PRP2 0x0 00:32:28.204 [2024-11-18 08:06:10.883378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.204 [2024-11-18 08:06:10.883443] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:28.205 [2024-11-18 08:06:10.883463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:28.205 [2024-11-18 08:06:10.886743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:28.205 [2024-11-18 08:06:10.886797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc58890 (9): Bad file descriptor 00:32:28.205 [2024-11-18 08:06:10.912922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:32:28.205 8215.40 IOPS, 32.09 MiB/s [2024-11-18T07:06:21.293Z] 8287.33 IOPS, 32.37 MiB/s [2024-11-18T07:06:21.293Z] 8352.86 IOPS, 32.63 MiB/s [2024-11-18T07:06:21.293Z] 8393.88 IOPS, 32.79 MiB/s [2024-11-18T07:06:21.293Z] 8420.00 IOPS, 32.89 MiB/s [2024-11-18T07:06:21.293Z] [2024-11-18 08:06:15.479940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.479984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 
08:06:15.480306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480467] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:56144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480837] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.480982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.480997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.481013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.481027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.481042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.481056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.481071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.481086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.205 [2024-11-18 08:06:15.481101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.205 [2024-11-18 08:06:15.481115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:28.206 [2024-11-18 08:06:15.481188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 
[2024-11-18 08:06:15.481720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.206 [2024-11-18 08:06:15.481880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.481983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.481998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.482013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.482027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.482051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.482066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.482082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.482096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.482111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.482125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.482140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.482154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.206 [2024-11-18 08:06:15.482169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.206 [2024-11-18 08:06:15.482183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.207 [2024-11-18 08:06:15.482213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 
[2024-11-18 08:06:15.482227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.207 [2024-11-18 08:06:15.482242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.207 [2024-11-18 08:06:15.482270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.207 [2024-11-18 08:06:15.482300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.207 [2024-11-18 08:06:15.482330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.207 [2024-11-18 08:06:15.482359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 
08:06:15.482750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482916] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.482982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.482997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.207 [2024-11-18 08:06:15.483380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.207 [2024-11-18 08:06:15.483394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 
[2024-11-18 08:06:15.483618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.208 [2024-11-18 08:06:15.483763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.208 [2024-11-18 08:06:15.483787] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:28.208 [2024-11-18 08:06:15.483800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.208 [2024-11-18 08:06:15.483831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:28.208 [2024-11-18 08:06:15.483848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56984 len:8 PRP1 0x0 PRP2 0x0
00:32:28.208 [2024-11-18 08:06:15.483862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.208 [2024-11-18 08:06:15.483881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:28.208 [2024-11-18 08:06:15.483893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:28.208 [2024-11-18 08:06:15.483904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56992 len:8 PRP1 0x0 PRP2 0x0
00:32:28.208 [2024-11-18 08:06:15.483917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.208 [2024-11-18 08:06:15.483935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:28.208 [2024-11-18 08:06:15.483947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:28.208 [2024-11-18 08:06:15.483958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57000 len:8 PRP1 0x0 PRP2 0x0
00:32:28.208 [2024-11-18 08:06:15.483971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.208 [2024-11-18 08:06:15.484037] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:32:28.208 [2024-11-18 08:06:15.484081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.208 [2024-11-18 08:06:15.484099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.208 [2024-11-18 08:06:15.484115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.208 [2024-11-18 08:06:15.484128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.208 [2024-11-18 08:06:15.484142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.208 [2024-11-18 08:06:15.484155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.208 [2024-11-18 08:06:15.484169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:28.208 [2024-11-18 08:06:15.484182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:28.208 [2024-11-18 08:06:15.484195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:32:28.208 [2024-11-18 08:06:15.487486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:28.208 [2024-11-18 08:06:15.487545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc58890 (9): Bad file descriptor 00:32:28.208 [2024-11-18 08:06:15.515523] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:32:28.208 8416.90 IOPS, 32.88 MiB/s [2024-11-18T07:06:21.296Z] 8436.73 IOPS, 32.96 MiB/s [2024-11-18T07:06:21.296Z] 8465.42 IOPS, 33.07 MiB/s [2024-11-18T07:06:21.296Z] 8491.23 IOPS, 33.17 MiB/s [2024-11-18T07:06:21.296Z] 8496.43 IOPS, 33.19 MiB/s [2024-11-18T07:06:21.296Z] 8506.00 IOPS, 33.23 MiB/s 00:32:28.208 Latency(us) 00:32:28.208 [2024-11-18T07:06:21.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.208 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:28.208 Verification LBA range: start 0x0 length 0x4000 00:32:28.208 NVMe0n1 : 15.02 8505.38 33.22 678.04 0.00 13911.00 552.20 18641.35 00:32:28.208 [2024-11-18T07:06:21.296Z] =================================================================================================================== 00:32:28.208 [2024-11-18T07:06:21.296Z] Total : 8505.38 33.22 678.04 0.00 13911.00 552.20 18641.35 00:32:28.208 Received shutdown signal, test time was about 15.000000 seconds 00:32:28.208 00:32:28.208 Latency(us) 00:32:28.208 [2024-11-18T07:06:21.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.208 [2024-11-18T07:06:21.296Z] =================================================================================================================== 00:32:28.208 [2024-11-18T07:06:21.296Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:28.208 
08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=855663 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 855663 /var/tmp/bdevperf.sock 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 855663 ']' 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:28.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.208 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:28.467 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.467 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:28.467 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:28.726 [2024-11-18 08:06:21.769458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:28.726 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:28.984 [2024-11-18 08:06:22.034211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:28.984 08:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:29.551 NVMe0n1 00:32:29.551 08:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:29.809 00:32:30.069 08:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:30.328 00:32:30.328 08:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:30.328 08:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:30.586 08:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:30.844 08:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:34.133 08:06:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:34.133 08:06:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:34.133 08:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=856332 00:32:34.133 08:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:34.133 08:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 856332 00:32:35.514 { 00:32:35.514 "results": [ 00:32:35.514 { 00:32:35.514 "job": "NVMe0n1", 00:32:35.514 "core_mask": "0x1", 00:32:35.514 "workload": "verify", 00:32:35.514 "status": "finished", 00:32:35.514 "verify_range": { 00:32:35.514 "start": 0, 00:32:35.514 "length": 16384 00:32:35.514 }, 00:32:35.514 "queue_depth": 128, 00:32:35.514 "io_size": 4096, 00:32:35.514 "runtime": 1.008317, 00:32:35.514 "iops": 8493.360718900902, 00:32:35.514 "mibps": 33.17719030820665, 00:32:35.514 "io_failed": 0, 00:32:35.514 "io_timeout": 0, 00:32:35.514 "avg_latency_us": 
14984.600292698116, 00:32:35.514 "min_latency_us": 2997.6651851851852, 00:32:35.514 "max_latency_us": 15728.64 00:32:35.514 } 00:32:35.514 ], 00:32:35.514 "core_count": 1 00:32:35.514 } 00:32:35.514 08:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:35.514 [2024-11-18 08:06:21.300838] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:35.514 [2024-11-18 08:06:21.300927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855663 ] 00:32:35.514 [2024-11-18 08:06:21.370293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.514 [2024-11-18 08:06:21.413967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.514 [2024-11-18 08:06:23.757023] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:35.514 [2024-11-18 08:06:23.757106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.514 [2024-11-18 08:06:23.757130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.514 [2024-11-18 08:06:23.757161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.514 [2024-11-18 08:06:23.757175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.514 [2024-11-18 08:06:23.757189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:32:35.514 [2024-11-18 08:06:23.757203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.514 [2024-11-18 08:06:23.757217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.514 [2024-11-18 08:06:23.757231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.514 [2024-11-18 08:06:23.757245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:35.514 [2024-11-18 08:06:23.757289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:35.514 [2024-11-18 08:06:23.757326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x814890 (9): Bad file descriptor 00:32:35.514 [2024-11-18 08:06:23.810926] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:35.514 Running I/O for 1 seconds... 
00:32:35.514 8436.00 IOPS, 32.95 MiB/s 00:32:35.514 Latency(us) 00:32:35.514 [2024-11-18T07:06:28.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.514 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:35.514 Verification LBA range: start 0x0 length 0x4000 00:32:35.514 NVMe0n1 : 1.01 8493.36 33.18 0.00 0.00 14984.60 2997.67 15728.64 00:32:35.514 [2024-11-18T07:06:28.602Z] =================================================================================================================== 00:32:35.514 [2024-11-18T07:06:28.602Z] Total : 8493.36 33.18 0.00 0.00 14984.60 2997.67 15728.64 00:32:35.514 08:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:35.514 08:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:35.514 08:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:35.773 08:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:35.773 08:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:36.031 08:06:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:36.612 08:06:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 855663 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 855663 ']' 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 855663 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 855663 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 855663' 00:32:39.903 killing process with pid 855663 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 855663 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 855663 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:39.903 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:40.162 rmmod nvme_tcp 00:32:40.162 rmmod nvme_fabrics 00:32:40.162 rmmod nvme_keyring 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 852892 ']' 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 852892 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 852892 ']' 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 852892 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.162 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 852892 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 852892' 00:32:40.420 killing process with pid 852892 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 852892 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 852892 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.420 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:43.004 00:32:43.004 real 0m35.490s 00:32:43.004 user 2m5.673s 00:32:43.004 sys 
0m5.875s 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:43.004 ************************************ 00:32:43.004 END TEST nvmf_failover 00:32:43.004 ************************************ 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.004 ************************************ 00:32:43.004 START TEST nvmf_host_discovery 00:32:43.004 ************************************ 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:43.004 * Looking for test storage... 
00:32:43.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:43.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.004 --rc genhtml_branch_coverage=1 00:32:43.004 --rc genhtml_function_coverage=1 00:32:43.004 --rc 
genhtml_legend=1 00:32:43.004 --rc geninfo_all_blocks=1 00:32:43.004 --rc geninfo_unexecuted_blocks=1 00:32:43.004 00:32:43.004 ' 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:43.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.004 --rc genhtml_branch_coverage=1 00:32:43.004 --rc genhtml_function_coverage=1 00:32:43.004 --rc genhtml_legend=1 00:32:43.004 --rc geninfo_all_blocks=1 00:32:43.004 --rc geninfo_unexecuted_blocks=1 00:32:43.004 00:32:43.004 ' 00:32:43.004 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:43.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.005 --rc genhtml_branch_coverage=1 00:32:43.005 --rc genhtml_function_coverage=1 00:32:43.005 --rc genhtml_legend=1 00:32:43.005 --rc geninfo_all_blocks=1 00:32:43.005 --rc geninfo_unexecuted_blocks=1 00:32:43.005 00:32:43.005 ' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:43.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.005 --rc genhtml_branch_coverage=1 00:32:43.005 --rc genhtml_function_coverage=1 00:32:43.005 --rc genhtml_legend=1 00:32:43.005 --rc geninfo_all_blocks=1 00:32:43.005 --rc geninfo_unexecuted_blocks=1 00:32:43.005 00:32:43.005 ' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.005 08:06:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.005 08:06:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.005 08:06:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:43.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
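The `integer expression expected` message captured above comes from `common.sh` line 33 evaluating `'[' '' -eq 1 ']'`: the `-eq` operator of `[` requires integer operands, so an empty or unset variable makes the test fail with that diagnostic (and a non-zero status, which the script here tolerates). A minimal standalone sketch of the failure and of a common guard; the variable name `flag` is illustrative, not from the SPDK scripts:

```shell
#!/usr/bin/env bash
# Reproduce: [ ... -eq ... ] needs both operands to be integers;
# an empty string triggers "integer expression expected" on stderr
# and the test evaluates as false.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "not enabled (empty or non-integer operand)"
fi

# Common guard: default the variable to 0 before the numeric test,
# so [ never sees an empty operand.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

The guarded form is why the comparison only warns rather than aborts here: `[` returns status 2 on the malformed test, the `if` takes the false branch, and execution continues.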
00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:43.005 08:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:44.909 
08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.909 08:06:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:44.909 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:44.909 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.909 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:44.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:44.910 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.910 08:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:32:45.169 00:32:45.169 --- 10.0.0.2 ping statistics --- 00:32:45.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.169 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:45.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:32:45.169 00:32:45.169 --- 10.0.0.1 ping statistics --- 00:32:45.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.169 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.169 
08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=859055 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 859055 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 859055 ']' 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.169 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.169 [2024-11-18 08:06:38.245301] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:45.169 [2024-11-18 08:06:38.245381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.429 [2024-11-18 08:06:38.320220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.429 [2024-11-18 08:06:38.364644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.429 [2024-11-18 08:06:38.364699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.429 [2024-11-18 08:06:38.364712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.429 [2024-11-18 08:06:38.364723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.429 [2024-11-18 08:06:38.364733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:45.429 [2024-11-18 08:06:38.365276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.429 [2024-11-18 08:06:38.500880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.429 [2024-11-18 08:06:38.509107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:45.429 08:06:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.429 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.688 null0 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.688 null1 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=859080 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 859080 /tmp/host.sock 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 859080 ']' 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:45.688 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.688 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.688 [2024-11-18 08:06:38.582056] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:45.688 [2024-11-18 08:06:38.582137] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859080 ] 00:32:45.688 [2024-11-18 08:06:38.646666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.688 [2024-11-18 08:06:38.691539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:45.947 08:06:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:45.947 08:06:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.947 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:45.948 08:06:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:45.948 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.948 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:45.948 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.948 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:45.948 08:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:45.948 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:46.219 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.219 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.220 [2024-11-18 08:06:39.070591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
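The `waitforcondition` calls throughout this log follow one shape: a condition string, a bounded retry counter (`local max=10`), an `eval` per attempt, and `sleep 1` between attempts. A minimal sketch of that helper follows; the names mirror what the trace shows, but this is illustrative, not the exact `autotest_common.sh` implementation.

```shell
#!/usr/bin/env bash
# Sketch of the polling helper seen in this trace: retry a condition string
# up to $max times, sleeping 1s between attempts; fail if it never holds.
# NOTE: illustrative only -- the real helper lives in autotest_common.sh.
waitforcondition() {
	local cond=$1
	local max=${2:-10}
	while ((max--)); do
		# eval so condition strings like '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
		# are re-expanded on every attempt
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	echo "condition never met: $cond" >&2
	return 1
}
```

Invoked as in the trace, e.g. `waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'`; re-evaluating the subshell on each pass is what lets the loop observe the controller appearing asynchronously.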
00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:46.220 08:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:46.786 [2024-11-18 08:06:39.837831] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:46.786 [2024-11-18 08:06:39.837856] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:46.786 [2024-11-18 08:06:39.837880] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:47.046 [2024-11-18 08:06:39.924164] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:47.046 [2024-11-18 08:06:40.099264] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:47.046 [2024-11-18 08:06:40.100387] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x17c4740:1 started. 00:32:47.046 [2024-11-18 08:06:40.102194] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:47.046 [2024-11-18 08:06:40.102217] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:47.046 [2024-11-18 08:06:40.106883] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x17c4740 was disconnected and freed. delete nvme_qpair. 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.305 08:06:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.305 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:47.565 
08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.565 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.565 [2024-11-18 08:06:40.411902] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x17c4940:1 started. 00:32:47.566 [2024-11-18 08:06:40.417224] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x17c4940 was disconnected and freed. delete nvme_qpair. 
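The `get_bdev_list` and `get_subsystem_names` helpers exercised above share one pipeline: call the SPDK JSON-RPC method on the host socket, pull the `.name` fields with `jq -r`, then `sort | xargs` to normalize the result to a single sorted line for string comparison. A hedged sketch, assuming `rpc_cmd` wraps `scripts/rpc.py` as it does in the SPDK test harness:

```shell
# Sketch of the name-listing helpers used throughout discovery.sh.
# Assumption: rpc_cmd forwards its arguments to scripts/rpc.py, so -s
# /tmp/host.sock targets the host application's RPC socket.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_names() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
```

The `sort | xargs` tail is why the trace compares against fixed strings like `"nvme0n1 nvme0n2"`: output order from the RPC is not guaranteed, so the helpers canonicalize before the `[[ ... == ... ]]` check.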
00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.566 [2024-11-18 08:06:40.498848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:47.566 [2024-11-18 08:06:40.499624] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:47.566 [2024-11-18 08:06:40.499671] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.566 08:06:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.566 [2024-11-18 08:06:40.586350] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:47.566 08:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:47.566 [2024-11-18 08:06:40.652178] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:47.566 [2024-11-18 08:06:40.652225] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:47.566 [2024-11-18 08:06:40.652241] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:32:47.566 [2024-11-18 08:06:40.652255] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.996 [2024-11-18 08:06:41.719465] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:48.996 [2024-11-18 08:06:41.719533] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:48.996 [2024-11-18 08:06:41.723535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.996 [2024-11-18 08:06:41.723586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.996 [2024-11-18 08:06:41.723604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.996 [2024-11-18 08:06:41.723618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.996 [2024-11-18 08:06:41.723633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.996 [2024-11-18 08:06:41.723647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.996 [2024-11-18 08:06:41.723661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.996 [2024-11-18 08:06:41.723676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.996 [2024-11-18 08:06:41.723690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796900 is same with the state(6) to be set 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:48.996 08:06:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:48.996 [2024-11-18 08:06:41.733527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796900 (9): Bad file descriptor 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.996 [2024-11-18 08:06:41.743555] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:48.996 [2024-11-18 08:06:41.743578] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:48.996 [2024-11-18 08:06:41.743589] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:48.996 [2024-11-18 08:06:41.743598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:48.996 [2024-11-18 08:06:41.743647] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:48.996 [2024-11-18 08:06:41.743824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.996 [2024-11-18 08:06:41.743853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796900 with addr=10.0.0.2, port=4420 00:32:48.996 [2024-11-18 08:06:41.743871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796900 is same with the state(6) to be set 00:32:48.996 [2024-11-18 08:06:41.743901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796900 (9): Bad file descriptor 00:32:48.996 [2024-11-18 08:06:41.743924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:48.996 [2024-11-18 08:06:41.743939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:48.996 [2024-11-18 08:06:41.743957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:48.996 [2024-11-18 08:06:41.743970] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:48.996 [2024-11-18 08:06:41.743981] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:48.996 [2024-11-18 08:06:41.743989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:48.996 [2024-11-18 08:06:41.753681] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:48.996 [2024-11-18 08:06:41.753705] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:48.996 [2024-11-18 08:06:41.753714] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:48.996 [2024-11-18 08:06:41.753722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:48.996 [2024-11-18 08:06:41.753764] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:48.996 [2024-11-18 08:06:41.753957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.996 [2024-11-18 08:06:41.753985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796900 with addr=10.0.0.2, port=4420 00:32:48.996 [2024-11-18 08:06:41.754003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796900 is same with the state(6) to be set 00:32:48.996 [2024-11-18 08:06:41.754025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796900 (9): Bad file descriptor 00:32:48.996 [2024-11-18 08:06:41.754046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:48.996 [2024-11-18 08:06:41.754060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:48.996 [2024-11-18 08:06:41.754073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:48.996 [2024-11-18 08:06:41.754085] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:48.996 [2024-11-18 08:06:41.754094] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:48.996 [2024-11-18 08:06:41.754102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.996 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:48.997 [2024-11-18 08:06:41.763802] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:48.997 [2024-11-18 08:06:41.763827] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:48.997 [2024-11-18 08:06:41.763846] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:48.997 [2024-11-18 08:06:41.763857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:48.997 [2024-11-18 08:06:41.763884] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:48.997 [2024-11-18 08:06:41.763987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.997 [2024-11-18 08:06:41.764015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796900 with addr=10.0.0.2, port=4420 00:32:48.997 [2024-11-18 08:06:41.764032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796900 is same with the state(6) to be set 00:32:48.997 [2024-11-18 08:06:41.764054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796900 (9): Bad file descriptor 00:32:48.997 [2024-11-18 08:06:41.764083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:48.997 [2024-11-18 08:06:41.764097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:48.997 [2024-11-18 08:06:41.764111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:48.997 [2024-11-18 08:06:41.764123] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:48.997 [2024-11-18 08:06:41.764132] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:48.997 [2024-11-18 08:06:41.764140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:48.997 [2024-11-18 08:06:41.773920] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:48.997 [2024-11-18 08:06:41.773944] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:48.997 [2024-11-18 08:06:41.773954] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:48.997 [2024-11-18 08:06:41.773962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:48.997 [2024-11-18 08:06:41.774004] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:48.997 [2024-11-18 08:06:41.774137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.997 [2024-11-18 08:06:41.774164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796900 with addr=10.0.0.2, port=4420 00:32:48.997 [2024-11-18 08:06:41.774181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796900 is same with the state(6) to be set 00:32:48.997 [2024-11-18 08:06:41.774203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796900 (9): Bad file descriptor 00:32:48.997 [2024-11-18 08:06:41.774238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:48.997 [2024-11-18 08:06:41.774268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:48.997 [2024-11-18 08:06:41.774282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:48.997 [2024-11-18 08:06:41.774294] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:48.997 [2024-11-18 08:06:41.774303] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:48.997 [2024-11-18 08:06:41.774311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:48.997 [2024-11-18 08:06:41.784038] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:48.997 [2024-11-18 08:06:41.784060] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:48.997 [2024-11-18 08:06:41.784069] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:48.997 [2024-11-18 08:06:41.784076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:48.997 [2024-11-18 08:06:41.784116] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:48.997 [2024-11-18 08:06:41.784250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.997 [2024-11-18 08:06:41.784277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796900 with addr=10.0.0.2, port=4420 00:32:48.997 [2024-11-18 08:06:41.784293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796900 is same with the state(6) to be set 00:32:48.997 [2024-11-18 08:06:41.784315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796900 (9): Bad file descriptor 00:32:48.997 [2024-11-18 08:06:41.784349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:48.997 [2024-11-18 08:06:41.784366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:48.997 [2024-11-18 08:06:41.784379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:48.997 [2024-11-18 08:06:41.784391] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:48.997 [2024-11-18 08:06:41.784400] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:48.997 [2024-11-18 08:06:41.784408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.997 [2024-11-18 08:06:41.794150] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:48.997 [2024-11-18 08:06:41.794169] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:48.997 [2024-11-18 08:06:41.794178] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:48.997 [2024-11-18 08:06:41.794185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:48.997 [2024-11-18 08:06:41.794222] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:48.997 [2024-11-18 08:06:41.794440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.997 [2024-11-18 08:06:41.794466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796900 with addr=10.0.0.2, port=4420 00:32:48.997 [2024-11-18 08:06:41.794485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796900 is same with the state(6) to be set 00:32:48.997 [2024-11-18 08:06:41.794523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796900 (9): Bad file descriptor 00:32:48.997 [2024-11-18 08:06:41.794559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:48.997 [2024-11-18 08:06:41.794577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:48.997 [2024-11-18 08:06:41.794590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:48.997 [2024-11-18 08:06:41.794602] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:48.997 [2024-11-18 08:06:41.794611] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:48.997 [2024-11-18 08:06:41.794619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 
00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:48.997 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:48.997 [2024-11-18 08:06:41.804256] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:48.997 [2024-11-18 08:06:41.804276] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:48.997 [2024-11-18 08:06:41.804285] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:48.997 [2024-11-18 08:06:41.804293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:48.997 [2024-11-18 08:06:41.804330] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:48.997 [2024-11-18 08:06:41.804473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.998 [2024-11-18 08:06:41.804508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796900 with addr=10.0.0.2, port=4420 00:32:48.998 [2024-11-18 08:06:41.804528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796900 is same with the state(6) to be set 00:32:48.998 [2024-11-18 08:06:41.804559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796900 (9): Bad file descriptor 00:32:48.998 [2024-11-18 08:06:41.804580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:48.998 [2024-11-18 08:06:41.804595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:48.998 [2024-11-18 08:06:41.804613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:48.998 [2024-11-18 08:06:41.804626] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:48.998 [2024-11-18 08:06:41.804637] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:48.998 [2024-11-18 08:06:41.804645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:48.998 [2024-11-18 08:06:41.806638] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:48.998 [2024-11-18 08:06:41.806670] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:48.998 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.998 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:48.998 08:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:49.939 08:06:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:49.939 08:06:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:49.939 08:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:49.939 08:06:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.939 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.197 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:50.197 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:50.197 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:50.198 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:50.198 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:50.198 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.198 08:06:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.134 [2024-11-18 08:06:44.109180] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:51.134 [2024-11-18 08:06:44.109206] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:51.134 [2024-11-18 08:06:44.109229] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:51.393 [2024-11-18 08:06:44.235653] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:51.393 [2024-11-18 08:06:44.334368] bdev_nvme.c:5634:nvme_ctrlr_create_done: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:51.393 [2024-11-18 08:06:44.335218] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x17920e0:1 started. 00:32:51.393 [2024-11-18 08:06:44.337405] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:51.393 [2024-11-18 08:06:44.337436] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:51.393 [2024-11-18 08:06:44.339037] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x17920e0 was disconnected and freed. delete nvme_qpair. 
00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.393 request: 00:32:51.393 { 00:32:51.393 "name": "nvme", 00:32:51.393 "trtype": "tcp", 00:32:51.393 "traddr": "10.0.0.2", 00:32:51.393 "adrfam": "ipv4", 00:32:51.393 "trsvcid": "8009", 00:32:51.393 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:51.393 "wait_for_attach": true, 00:32:51.393 "method": "bdev_nvme_start_discovery", 00:32:51.393 "req_id": 1 00:32:51.393 } 00:32:51.393 Got JSON-RPC error response 00:32:51.393 response: 00:32:51.393 { 00:32:51.393 "code": -17, 00:32:51.393 "message": "File exists" 00:32:51.393 } 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.393 request: 00:32:51.393 { 00:32:51.393 "name": "nvme_second", 00:32:51.393 "trtype": "tcp", 00:32:51.393 "traddr": "10.0.0.2", 00:32:51.393 "adrfam": "ipv4", 00:32:51.393 "trsvcid": "8009", 00:32:51.393 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:51.393 "wait_for_attach": true, 00:32:51.393 "method": "bdev_nvme_start_discovery", 00:32:51.393 "req_id": 1 00:32:51.393 } 00:32:51.393 Got JSON-RPC error response 00:32:51.393 response: 00:32:51.393 { 00:32:51.393 "code": -17, 00:32:51.393 "message": "File exists" 00:32:51.393 } 
00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:51.393 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:51.654 08:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.590 [2024-11-18 08:06:45.560862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.590 [2024-11-18 08:06:45.560917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c4090 with addr=10.0.0.2, port=8010 00:32:52.590 [2024-11-18 08:06:45.560941] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:52.590 [2024-11-18 08:06:45.560955] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:52.590 [2024-11-18 08:06:45.560968] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:53.525 [2024-11-18 08:06:46.563282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.525 [2024-11-18 08:06:46.563317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c4090 with addr=10.0.0.2, port=8010 00:32:53.525 [2024-11-18 08:06:46.563338] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:53.525 [2024-11-18 08:06:46.563351] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:53.525 [2024-11-18 08:06:46.563363] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:54.907 [2024-11-18 08:06:47.565530] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:54.907 request: 00:32:54.907 { 00:32:54.907 "name": "nvme_second", 00:32:54.907 "trtype": "tcp", 00:32:54.907 "traddr": "10.0.0.2", 00:32:54.907 "adrfam": "ipv4", 00:32:54.907 "trsvcid": "8010", 00:32:54.907 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:54.907 "wait_for_attach": false, 00:32:54.907 "attach_timeout_ms": 3000, 00:32:54.907 "method": "bdev_nvme_start_discovery", 00:32:54.907 "req_id": 1 
00:32:54.907 } 00:32:54.907 Got JSON-RPC error response 00:32:54.907 response: 00:32:54.907 { 00:32:54.907 "code": -110, 00:32:54.907 "message": "Connection timed out" 00:32:54.907 } 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 859080 00:32:54.907 08:06:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.907 rmmod nvme_tcp 00:32:54.907 rmmod nvme_fabrics 00:32:54.907 rmmod nvme_keyring 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 859055 ']' 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 859055 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 859055 ']' 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 859055 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 859055 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 859055' 00:32:54.907 killing process with pid 859055 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 859055 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 859055 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.907 08:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.447 08:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr 
flush cvl_0_1 00:32:57.447 00:32:57.447 real 0m14.413s 00:32:57.447 user 0m20.940s 00:32:57.447 sys 0m2.915s 00:32:57.447 08:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:57.447 08:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.447 ************************************ 00:32:57.447 END TEST nvmf_host_discovery 00:32:57.447 ************************************ 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.448 ************************************ 00:32:57.448 START TEST nvmf_host_multipath_status 00:32:57.448 ************************************ 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:57.448 * Looking for test storage... 
00:32:57.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:57.448 08:06:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.448 08:06:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:57.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.448 --rc genhtml_branch_coverage=1 00:32:57.448 --rc genhtml_function_coverage=1 00:32:57.448 --rc genhtml_legend=1 00:32:57.448 --rc geninfo_all_blocks=1 00:32:57.448 --rc geninfo_unexecuted_blocks=1 00:32:57.448 00:32:57.448 ' 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:57.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.448 --rc genhtml_branch_coverage=1 00:32:57.448 --rc genhtml_function_coverage=1 00:32:57.448 --rc genhtml_legend=1 00:32:57.448 --rc geninfo_all_blocks=1 00:32:57.448 --rc geninfo_unexecuted_blocks=1 00:32:57.448 00:32:57.448 ' 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:57.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.448 --rc genhtml_branch_coverage=1 00:32:57.448 --rc genhtml_function_coverage=1 00:32:57.448 --rc genhtml_legend=1 00:32:57.448 --rc geninfo_all_blocks=1 00:32:57.448 --rc geninfo_unexecuted_blocks=1 00:32:57.448 00:32:57.448 ' 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:57.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.448 --rc genhtml_branch_coverage=1 00:32:57.448 --rc genhtml_function_coverage=1 00:32:57.448 --rc genhtml_legend=1 00:32:57.448 --rc geninfo_all_blocks=1 00:32:57.448 --rc geninfo_unexecuted_blocks=1 00:32:57.448 00:32:57.448 ' 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:57.448 
08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:57.448 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:57.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:57.449 08:06:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.449 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.353 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:59.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:59.354 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:59.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.354 08:06:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:59.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.354 08:06:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:59.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:32:59.354 00:32:59.354 --- 10.0.0.2 ping statistics --- 00:32:59.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.354 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:32:59.354 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:59.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:32:59.354 00:32:59.354 --- 10.0.0.1 ping statistics --- 00:32:59.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.355 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:59.355 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=862256 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 862256 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 862256 ']' 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.615 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.615 [2024-11-18 08:06:52.503017] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:59.615 [2024-11-18 08:06:52.503109] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.615 [2024-11-18 08:06:52.579009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:59.615 [2024-11-18 08:06:52.626652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.615 [2024-11-18 08:06:52.626714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:59.615 [2024-11-18 08:06:52.626729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:59.615 [2024-11-18 08:06:52.626740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:59.615 [2024-11-18 08:06:52.626751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:59.615 [2024-11-18 08:06:52.628230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.615 [2024-11-18 08:06:52.628235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.874 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.874 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:59.874 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:59.874 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:59.874 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.874 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:59.874 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=862256 00:32:59.874 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:00.132 [2024-11-18 08:06:53.080384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.132 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:33:00.390 Malloc0 00:33:00.390 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:00.649 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:00.907 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:01.166 [2024-11-18 08:06:54.176514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.166 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:01.424 [2024-11-18 08:06:54.449204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=862541 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 862541 /var/tmp/bdevperf.sock 00:33:01.424 08:06:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 862541 ']' 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:01.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.424 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:01.682 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.682 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:01.682 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:01.940 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:02.507 Nvme0n1 00:33:02.507 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:33:03.073 Nvme0n1
00:33:03.073 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:33:03.073 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:33:04.978 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:33:04.978 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:33:05.236 08:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:05.495 08:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:33:06.875 08:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:33:06.875 08:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:06.875 08:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:06.875 08:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:06.875 08:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:06.875 08:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:06.875 08:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:06.875 08:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:07.133 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:07.133 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:07.133 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.133 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:07.390 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.390 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:07.390 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.391 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:07.648 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.648 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:07.648 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.648 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:07.906 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.906 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:07.906 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.906 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:08.164 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:08.164 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:33:08.164 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:08.423 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:08.993 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:33:09.932 08:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:33:09.932 08:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:09.932 08:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:09.932 08:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:10.190 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:10.190 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:10.190 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.190 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:10.448 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:10.448 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:10.448 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.448 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:10.706 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:10.706 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:10.706 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.706 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:10.964 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:10.964 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:10.964 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.964 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:11.222 08:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:11.222 08:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:11.222 08:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:11.222 08:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:11.480 08:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:11.481 08:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:33:11.481 08:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:11.739 08:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:33:11.998 08:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:33:12.937 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:33:12.937 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:12.937 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:12.937 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:13.196 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:13.196 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:13.196 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:13.196 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:13.764 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:13.764 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:13.764 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:13.764 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:13.764 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:13.764 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:13.764 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:13.764 08:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:14.023 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:14.023 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:14.023 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:14.023 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:14.282 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:14.282 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:14.542 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:14.542 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:14.800 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:14.800 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:33:14.800 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:15.059 08:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:33:15.318 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:33:16.255 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:33:16.255 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:16.255 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:16.255 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:16.512 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:16.512 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:16.512 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:16.512 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:16.771 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:16.771 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:16.771 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:16.771 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:17.029 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:17.029 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:17.029 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:17.029 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:17.310 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:17.310 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:17.310 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:17.310 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:17.568 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:17.568 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:17.568 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:17.568 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:17.827 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:17.827 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:33:17.827 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:33:18.398 08:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:33:18.398 08:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:33:19.785 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:33:19.785 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:19.785 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:19.785 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:19.785 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:19.785 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:19.785 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:19.785 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:20.042 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:20.042 08:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:20.042 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:20.042 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:20.301 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:20.301 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:20.301 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:20.301 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:20.587 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:20.587 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:33:20.587 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:20.587 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:20.871 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:20.871 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:20.871 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:20.871 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:21.129 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:21.129 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:33:21.129 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:33:21.388 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:21.648 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:33:22.583 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:33:22.583 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:22.583 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:22.583 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:23.149 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:23.149 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:23.149 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:23.149 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:23.149 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:23.149 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:23.149 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:23.149 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:23.407 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:23.407 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:23.407 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:23.407 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:23.973 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:23.973 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:33:23.973 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:23.973 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:23.973 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:23.973 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:23.973 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:23.973 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:24.231 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:24.231 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:33:24.489 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:33:24.489 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:33:25.056 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:25.056 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:33:26.430 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:33:26.430 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:26.430 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:26.430 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:26.430 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:26.430 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:26.430 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:26.430 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:26.688 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:26.688 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:26.688 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:26.688 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:26.946 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:26.946 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:26.946 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:26.946 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:27.204 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:27.204 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:27.204 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:27.204 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:27.462 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:27.462 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:27.462 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:27.462 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:28.029 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:28.029 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:33:28.029 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:28.029 08:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:28.287 08:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:33:29.662 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:33:29.662 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:29.662 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:29.662 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:29.662 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:29.662 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:29.662 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:29.662 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:29.920 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:29.920 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:29.920 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:29.920 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:30.178 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:30.178 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:30.178 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:30.178 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:30.436 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:30.436 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:30.436 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:30.436 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:30.695 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:30.695 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:30.695 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:30.695 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:30.953 08:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:30.953 08:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:33:30.953 08:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:31.212 08:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:33:31.778 08:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:33:32.711 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:33:32.711 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:32.711 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:32.711 08:07:25
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.970 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.970 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:32.970 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.970 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:33.228 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.228 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:33.228 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.228 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:33.487 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.487 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:33.487 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.487 08:07:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:33.745 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.745 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:33.745 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.745 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:34.003 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.003 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:34.003 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.003 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:34.262 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.262 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:34.262 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:34.521 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:34.782 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:35.716 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:35.716 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:35.716 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.716 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:35.975 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.975 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:36.233 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.233 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:36.491 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:36.491 08:07:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:36.491 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.491 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:36.748 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.748 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:36.748 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.748 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:37.006 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.006 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:37.006 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.006 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:37.264 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.264 
08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:37.264 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.264 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 862541 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 862541 ']' 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 862541 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 862541 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 862541' 00:33:37.522 killing process with pid 862541 00:33:37.522 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 862541 00:33:37.522 08:07:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 862541 00:33:37.522 { 00:33:37.522 "results": [ 00:33:37.522 { 00:33:37.522 "job": "Nvme0n1", 00:33:37.522 "core_mask": "0x4", 00:33:37.522 "workload": "verify", 00:33:37.522 "status": "terminated", 00:33:37.522 "verify_range": { 00:33:37.522 "start": 0, 00:33:37.522 "length": 16384 00:33:37.522 }, 00:33:37.522 "queue_depth": 128, 00:33:37.522 "io_size": 4096, 00:33:37.522 "runtime": 34.406332, 00:33:37.522 "iops": 7881.83407635548, 00:33:37.522 "mibps": 30.788414360763593, 00:33:37.522 "io_failed": 0, 00:33:37.522 "io_timeout": 0, 00:33:37.522 "avg_latency_us": 16212.412244564493, 00:33:37.522 "min_latency_us": 661.4281481481481, 00:33:37.522 "max_latency_us": 4026531.84 00:33:37.522 } 00:33:37.522 ], 00:33:37.522 "core_count": 1 00:33:37.522 } 00:33:37.792 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 862541 00:33:37.792 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:37.792 [2024-11-18 08:06:54.511694] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:33:37.792 [2024-11-18 08:06:54.511801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862541 ] 00:33:37.792 [2024-11-18 08:06:54.580888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.792 [2024-11-18 08:06:54.628922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:37.792 Running I/O for 90 seconds... 
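Aside on the bdevperf "results" JSON just above: the reported "mibps" value follows directly from "iops" times "io_size" (bytes per I/O), scaled to mebibytes. A minimal standalone check of that relationship, with the numeric values copied from the JSON in this log (nothing here queries a live SPDK target):

```python
# Consistency check for the bdevperf summary in this log.
# All numbers are copied verbatim from the "results" JSON above.
iops = 7881.83407635548              # "iops" field
io_size = 4096                       # "io_size" field, bytes per I/O
reported_mibps = 30.788414360763593  # "mibps" field

# MiB/s = I/Os per second * bytes per I/O, scaled to mebibytes (2**20 bytes).
computed_mibps = iops * io_size / (1024 * 1024)

print(f"computed {computed_mibps:.6f} MiB/s vs reported {reported_mibps:.6f}")
assert abs(computed_mibps - reported_mibps) < 1e-9
```

The same arithmetic applies to any bdevperf run with a fixed io_size; only the two input fields change.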
00:33:37.792 8573.00 IOPS, 33.49 MiB/s [2024-11-18T07:07:30.880Z] 8651.00 IOPS, 33.79 MiB/s [2024-11-18T07:07:30.880Z] 8504.00 IOPS, 33.22 MiB/s [2024-11-18T07:07:30.880Z] 8504.00 IOPS, 33.22 MiB/s [2024-11-18T07:07:30.880Z] 8464.40 IOPS, 33.06 MiB/s [2024-11-18T07:07:30.880Z] 8471.83 IOPS, 33.09 MiB/s [2024-11-18T07:07:30.880Z] 8467.43 IOPS, 33.08 MiB/s [2024-11-18T07:07:30.880Z] 8468.88 IOPS, 33.08 MiB/s [2024-11-18T07:07:30.880Z] 8478.44 IOPS, 33.12 MiB/s [2024-11-18T07:07:30.880Z] 8467.50 IOPS, 33.08 MiB/s [2024-11-18T07:07:30.880Z] 8440.91 IOPS, 32.97 MiB/s [2024-11-18T07:07:30.880Z] 8432.33 IOPS, 32.94 MiB/s [2024-11-18T07:07:30.880Z] 8436.00 IOPS, 32.95 MiB/s [2024-11-18T07:07:30.880Z] 8410.21 IOPS, 32.85 MiB/s [2024-11-18T07:07:30.880Z] 8389.00 IOPS, 32.77 MiB/s [2024-11-18T07:07:30.880Z] [2024-11-18 08:07:11.160567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.160626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.160728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.160751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.160789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.160807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.160846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.160865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.160888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.160905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.160928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.160944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.160965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.160981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.161018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.161044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.161086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.792 [2024-11-18 08:07:11.161104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:37.792 [2024-11-18 08:07:11.161137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95912 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 
p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.161744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.161761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:37.793 [2024-11-18 08:07:11.162146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.793 [2024-11-18 08:07:11.162271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.793 [2024-11-18 08:07:11.162323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.793 [2024-11-18 08:07:11.162366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:37.793 
[2024-11-18 08:07:11.162390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.793 [2024-11-18 08:07:11.162408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.793 [2024-11-18 08:07:11.162462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.793 [2024-11-18 08:07:11.162523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.793 [2024-11-18 08:07:11.162575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.793 [2024-11-18 08:07:11.162616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 
08:07:11.162659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 
08:07:11.162903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.162959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.162983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.163000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.163023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.163040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.163068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.163086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.793 [2024-11-18 08:07:11.163110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.793 [2024-11-18 08:07:11.163126] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.794 [2024-11-18 08:07:11.163207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.163966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.163991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.794 [2024-11-18 08:07:11.164972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.794 [2024-11-18 08:07:11.164999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.795 [2024-11-18 08:07:11.165016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.795 [2024-11-18 08:07:11.165060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.795 [2024-11-18 08:07:11.165808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.165979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.165997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.795 [2024-11-18 08:07:11.166728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:37.795 [2024-11-18 08:07:11.166755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.796 [2024-11-18 08:07:11.166772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:11.166798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.796 [2024-11-18 08:07:11.166815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:11.166842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.796 [2024-11-18 08:07:11.166863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:11.166890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.796 [2024-11-18 08:07:11.166907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:37.796 7913.44 IOPS, 30.91 MiB/s [2024-11-18T07:07:30.884Z] 7447.94 IOPS, 29.09 MiB/s [2024-11-18T07:07:30.884Z] 7034.17 IOPS, 27.48 MiB/s [2024-11-18T07:07:30.884Z] 6663.95 IOPS, 26.03 MiB/s [2024-11-18T07:07:30.884Z] 6712.55 IOPS, 26.22 MiB/s [2024-11-18T07:07:30.884Z] 6799.14 IOPS, 26.56 MiB/s [2024-11-18T07:07:30.884Z] 6902.77 IOPS, 26.96 MiB/s [2024-11-18T07:07:30.884Z] 7066.52 IOPS, 27.60 MiB/s [2024-11-18T07:07:30.884Z] 7240.58 IOPS, 28.28 MiB/s [2024-11-18T07:07:30.884Z] 7386.36 IOPS, 28.85 MiB/s [2024-11-18T07:07:30.884Z] 7429.69 IOPS, 29.02 MiB/s [2024-11-18T07:07:30.884Z] 7470.96 IOPS, 29.18 MiB/s [2024-11-18T07:07:30.884Z] 7502.57 IOPS, 29.31 MiB/s [2024-11-18T07:07:30.884Z] 7580.93 IOPS, 29.61 MiB/s [2024-11-18T07:07:30.884Z] 7691.50 IOPS, 30.04 MiB/s 
[2024-11-18T07:07:30.884Z] 7798.06 IOPS, 30.46 MiB/s [2024-11-18T07:07:30.884Z] [2024-11-18 08:07:27.772124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27744 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.796 [2024-11-18 08:07:27.772677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:27808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.772979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.772995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.796 [2024-11-18 08:07:27.773069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.796 [2024-11-18 08:07:27.773105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.796 [2024-11-18 08:07:27.773141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:33:37.796 [2024-11-18 08:07:27.773162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 
[2024-11-18 08:07:27.773366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.796 [2024-11-18 08:07:27.773502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:37.796 [2024-11-18 08:07:27.773528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 
08:07:27.773605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.797 [2024-11-18 08:07:27.773737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773831] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.773965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.773980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.774001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.774017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.774039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.774055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.774076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.774091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.797 [2024-11-18 08:07:27.775726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.797 [2024-11-18 08:07:27.775972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.775994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.797 [2024-11-18 08:07:27.776025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.776056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.797 [2024-11-18 08:07:27.776089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.776113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.797 [2024-11-18 08:07:27.776129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.776151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.797 [2024-11-18 08:07:27.776167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.776188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.797 [2024-11-18 08:07:27.776204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.776246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.797 [2024-11-18 08:07:27.776264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.797 [2024-11-18 08:07:27.776286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.776304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.776326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.776343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.776365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.776396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.776419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.776435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.777623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.777827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.777870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.777970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.777986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.778323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.798 [2024-11-18 08:07:27.778358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.780026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.780052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.780082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.780101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.780129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.780146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.780168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.780185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.798 [2024-11-18 08:07:27.780208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.798 [2024-11-18 08:07:27.780225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.780754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.780793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.780832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.780870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.780910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.780945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.780964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.799 [2024-11-18 08:07:27.781683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.799 [2024-11-18 08:07:27.781889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.799 [2024-11-18 08:07:27.781910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.781925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.784751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.784787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.784831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.784849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.784886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.784903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.784923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.784939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.784960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.784976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.784997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.785358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.785396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.785434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.785472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.785939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.785976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.785997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.786012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.786096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.786154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.786192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.786235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.786274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.786313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.786351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.786390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.800 [2024-11-18 08:07:27.786428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-11-18 08:07:27.786466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.800 [2024-11-18 08:07:27.786488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.786528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.786552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.786569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.786590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.786606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.786668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.786689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.786712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.786730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.786754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.786776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.787514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.787563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.787603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.787657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.787694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.787731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.787768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.787827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.787863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.787899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.787935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.787976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.787998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.788013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.788034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.788049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.788070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.788101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.788124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.788140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.788161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.788176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.788198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.788214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.788235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.801 [2024-11-18 08:07:27.788250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.788272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.788288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.790189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.790213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.790241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.790258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.790280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.790297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.790319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.790335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.790362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.790379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.790400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.790416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.790438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.790453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.801 [2024-11-18 08:07:27.790512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-11-18 08:07:27.790533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.790573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.790612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.790651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.790689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.790727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.790765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.790818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.790857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.790926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.790967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.790990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.791083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.791161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.791215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.791254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.791291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.791509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.791561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.791639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.791662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.791678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.793191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.793238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-11-18 08:07:27.793278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.793332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.793371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.793431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.793486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.793540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.793579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.793603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.793620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.794068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.802 [2024-11-18 08:07:27.794092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:37.802 [2024-11-18 08:07:27.794119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.794138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.794193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.794232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.794287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.794324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.794377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.794417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.794463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.794512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.794552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.794575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.794592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.796397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.796441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.796505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.796567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.796606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.796645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.796683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.796724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.796771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.796826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.796880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.796917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.803 [2024-11-18 08:07:27.796953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.796975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.796991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.797011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.797027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.797048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.797064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.797085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.797115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.803 [2024-11-18 08:07:27.797136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.803 [2024-11-18 08:07:27.797152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.803 [2024-11-18 08:07:27.797189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.803 [2024-11-18 08:07:27.797225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.803 [2024-11-18 08:07:27.797264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.803 [2024-11-18 08:07:27.797301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.803 [2024-11-18 08:07:27.797337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.803 [2024-11-18 08:07:27.797374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.803 [2024-11-18 08:07:27.797410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.803 [2024-11-18 08:07:27.797445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.803 [2024-11-18 08:07:27.797506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:37.803 [2024-11-18 08:07:27.797531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.803 [2024-11-18 08:07:27.797547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.797587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.797627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.797665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.797706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.797749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.797804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.797857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.797894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.797930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.797951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.797966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.799582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.799629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.799669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.799710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.799750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.799806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.799860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.799919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.799956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.799977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.799993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.801562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.801608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.801648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.801688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.801728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.801767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.801822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.801874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.801911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.801953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.801974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.801989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.802010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.802025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.802045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.802061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.802081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.802097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.802117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.802132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.802153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.802168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.802189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.804 [2024-11-18 08:07:27.802204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.802276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.804 [2024-11-18 08:07:27.802312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:37.804 [2024-11-18 08:07:27.802335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.802351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.802399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.802446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.802522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.802565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.802605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.802645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.802686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.802726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.802765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.802818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.802854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.802891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.802927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.802962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.802983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.803002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.805 [2024-11-18 08:07:27.803328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.803364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.803401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.803461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.803506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:37.805 [2024-11-18 08:07:27.806731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.805 [2024-11-18 08:07:27.806749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.806771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.806788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.806832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.806850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.806872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.806888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.806910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.806926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.807080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.807119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.807591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.807670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.807751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.807806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.807963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.806 [2024-11-18 08:07:27.807985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.808009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.808025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.808047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.806 [2024-11-18 08:07:27.808063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:37.806 [2024-11-18 08:07:27.808086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.806 [2024-11-18 08:07:27.808103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.808124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.806 [2024-11-18 08:07:27.808140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.808177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.806 [2024-11-18 08:07:27.808193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.808216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.806 [2024-11-18 08:07:27.808231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.808253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.806 [2024-11-18 08:07:27.808268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.808289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.806 [2024-11-18 08:07:27.808305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.808326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.806 [2024-11-18 08:07:27.808342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.808363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.806 [2024-11-18 08:07:27.808379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.809386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.806 [2024-11-18 08:07:27.809410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.809453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.806 [2024-11-18 08:07:27.809475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.809561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.806 [2024-11-18 08:07:27.809581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.806 [2024-11-18 08:07:27.809605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.806 [2024-11-18 08:07:27.809622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.809644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.809661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.809684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.809702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.809725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.809742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.809764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.809792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.810234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.810280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.810320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.810362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.810403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.810443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.810497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.810545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.810568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.810585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.812909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.812966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.812986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.813023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.813060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.813351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.813665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.813705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.813745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.813768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.813791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.815701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.815728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.815758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.807 [2024-11-18 08:07:27.815777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.815808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.815868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.815898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.815917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.807 [2024-11-18 08:07:27.815946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.807 [2024-11-18 08:07:27.815965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.815987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.816857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.816881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.816897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.817501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.817556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.817603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.817643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.817684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.817724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.817764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.817819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.817856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.817895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.817952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.817975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.817993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.818016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.818034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.818056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.818074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.818096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.818117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.818166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.818188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.818213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.818230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.818253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.818270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.818293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.818310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.818333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.818350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.819839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.819885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.819913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.819931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.819954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.819979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.820108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.820153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.820384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.808 [2024-11-18 08:07:27.820458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.820528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.820573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.820613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.820653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.808 [2024-11-18 08:07:27.820698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:37.808 [2024-11-18 08:07:27.820721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.820738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.820760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.820798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.820821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.820863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.820887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.820902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.820924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.820940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.820961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.820977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.820998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.821013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.821035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.821051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.821072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.821088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.821110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.821126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.821150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.821166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.823958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.824859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.824975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.824992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.825030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.825323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.825561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.825665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.809 [2024-11-18 08:07:27.825682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.827057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.827084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.827131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.827153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.827177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.827195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.827218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.827236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.827259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.809 [2024-11-18 08:07:27.827291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:37.809 [2024-11-18 08:07:27.827315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.810 [2024-11-18 08:07:27.827332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:37.810 [2024-11-18 08:07:27.827354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.810 [2024-11-18 08:07:27.827371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.810 [2024-11-18 08:07:27.827409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.810 [2024-11-18 08:07:27.827448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.810 [2024-11-18 08:07:27.827516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.810 [2024-11-18 08:07:27.827563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.810 [2024-11-18 08:07:27.827603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.810 [2024-11-18 08:07:27.827650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.810 [2024-11-18 08:07:27.827689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.810 [2024-11-18 08:07:27.827730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.810 [2024-11-18 08:07:27.827770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.810 [2024-11-18 08:07:27.827836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.810 [2024-11-18 08:07:27.827891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.810 [2024-11-18 08:07:27.827929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.827951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.810 [2024-11-18 08:07:27.827966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.810 [2024-11-18 08:07:27.828045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.810 [2024-11-18 08:07:27.828081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.810 7872.66 IOPS, 30.75 MiB/s [2024-11-18T07:07:30.898Z] 7879.36 IOPS, 30.78 MiB/s [2024-11-18T07:07:30.898Z] 7885.41 IOPS, 30.80 MiB/s [2024-11-18T07:07:30.898Z] Received shutdown signal, test time was about 34.407182 seconds 00:33:37.810 00:33:37.810 Latency(us) 00:33:37.810 [2024-11-18T07:07:30.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.810 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:37.810 Verification LBA range: start 0x0 length 0x4000 00:33:37.810 Nvme0n1 : 34.41 7881.83 30.79 0.00 0.00 16212.41 661.43 4026531.84 
00:33:37.810 [2024-11-18T07:07:30.898Z] =================================================================================================================== 00:33:37.810 [2024-11-18T07:07:30.898Z] Total : 7881.83 30.79 0.00 0.00 16212.41 661.43 4026531.84 00:33:37.810 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:38.071 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:38.071 rmmod nvme_tcp 00:33:38.071 rmmod nvme_fabrics 00:33:38.071 rmmod nvme_keyring 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:38.071 
08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 862256 ']' 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 862256 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 862256 ']' 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 862256 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 862256 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 862256' 00:33:38.071 killing process with pid 862256 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 862256 00:33:38.071 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 862256 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:38.331 08:07:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.331 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.241 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.241 00:33:40.241 real 0m43.298s 00:33:40.241 user 2m10.075s 00:33:40.241 sys 0m11.746s 00:33:40.241 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.241 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:40.241 ************************************ 00:33:40.241 END TEST nvmf_host_multipath_status 00:33:40.241 ************************************ 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.501 08:07:33 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.501 ************************************ 00:33:40.501 START TEST nvmf_discovery_remove_ifc 00:33:40.501 ************************************ 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:40.501 * Looking for test storage... 00:33:40.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.501 08:07:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.501 
08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:40.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.501 --rc genhtml_branch_coverage=1 00:33:40.501 --rc genhtml_function_coverage=1 00:33:40.501 --rc genhtml_legend=1 00:33:40.501 --rc geninfo_all_blocks=1 00:33:40.501 --rc geninfo_unexecuted_blocks=1 00:33:40.501 00:33:40.501 ' 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:40.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.501 --rc genhtml_branch_coverage=1 00:33:40.501 --rc genhtml_function_coverage=1 00:33:40.501 --rc genhtml_legend=1 00:33:40.501 --rc geninfo_all_blocks=1 00:33:40.501 --rc geninfo_unexecuted_blocks=1 00:33:40.501 00:33:40.501 ' 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:40.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.501 --rc genhtml_branch_coverage=1 00:33:40.501 --rc genhtml_function_coverage=1 00:33:40.501 --rc genhtml_legend=1 00:33:40.501 --rc geninfo_all_blocks=1 00:33:40.501 --rc geninfo_unexecuted_blocks=1 00:33:40.501 00:33:40.501 ' 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:40.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.501 --rc genhtml_branch_coverage=1 00:33:40.501 --rc genhtml_function_coverage=1 00:33:40.501 --rc genhtml_legend=1 
00:33:40.501 --rc geninfo_all_blocks=1 00:33:40.501 --rc geninfo_unexecuted_blocks=1 00:33:40.501 00:33:40.501 ' 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.501 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:40.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:40.502 
08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:40.502 08:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:43.041 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.041 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:43.042 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:43.042 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.042 08:07:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:43.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.042 08:07:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.042 08:07:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:33:43.042 00:33:43.042 --- 10.0.0.2 ping statistics --- 00:33:43.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.042 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:43.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:33:43.042 00:33:43.042 --- 10.0.0.1 ping statistics --- 00:33:43.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.042 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=868887 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 868887 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 868887 ']' 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.042 08:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.042 [2024-11-18 08:07:35.855228] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:33:43.042 [2024-11-18 08:07:35.855318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.042 [2024-11-18 08:07:35.931945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.042 [2024-11-18 08:07:35.978663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.042 [2024-11-18 08:07:35.978719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:43.042 [2024-11-18 08:07:35.978735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.042 [2024-11-18 08:07:35.978746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.043 [2024-11-18 08:07:35.978758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.043 [2024-11-18 08:07:35.979365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.043 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.043 [2024-11-18 08:07:36.128568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.302 [2024-11-18 08:07:36.136758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:43.302 null0 00:33:43.302 [2024-11-18 08:07:36.168672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=869028 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 869028 /tmp/host.sock 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 869028 ']' 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:43.302 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.302 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.302 [2024-11-18 08:07:36.234857] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:33:43.302 [2024-11-18 08:07:36.234943] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869028 ] 00:33:43.302 [2024-11-18 08:07:36.302049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.302 [2024-11-18 08:07:36.347593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.561 08:07:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.561 08:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.938 [2024-11-18 08:07:37.595922] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:44.938 [2024-11-18 08:07:37.595946] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:44.938 [2024-11-18 08:07:37.595967] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:44.938 [2024-11-18 08:07:37.722382] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:44.938 [2024-11-18 08:07:37.777096] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:44.938 [2024-11-18 08:07:37.778025] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x817370:1 started. 
00:33:44.938 [2024-11-18 08:07:37.779694] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:44.938 [2024-11-18 08:07:37.779755] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:44.938 [2024-11-18 08:07:37.779803] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:44.938 [2024-11-18 08:07:37.779827] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:44.938 [2024-11-18 08:07:37.779852] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.938 [2024-11-18 08:07:37.784498] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x817370 was disconnected and freed. delete nvme_qpair. 
00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:44.938 08:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.876 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.876 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.876 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.876 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.876 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.876 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.876 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.876 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.136 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:46.136 08:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:47.076 08:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:47.076 08:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.076 08:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:47.076 08:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.076 08:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:47.076 08:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:47.076 08:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:33:47.076 08:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.076 08:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:47.076 08:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:48.013 08:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:48.992 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.992 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.992 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.992 08:07:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.992 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.992 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.992 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.992 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.252 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:49.252 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:50.194 08:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:33:50.194 [2024-11-18 08:07:43.220981] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:50.194 [2024-11-18 08:07:43.221072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.194 [2024-11-18 08:07:43.221095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.194 [2024-11-18 08:07:43.221115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.195 [2024-11-18 08:07:43.221129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.195 [2024-11-18 08:07:43.221142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.195 [2024-11-18 08:07:43.221156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.195 [2024-11-18 08:07:43.221169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.195 [2024-11-18 08:07:43.221182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.195 [2024-11-18 08:07:43.221197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.195 [2024-11-18 08:07:43.221210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.195 [2024-11-18 08:07:43.221224] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3bc0 is same with the state(6) to be set 00:33:50.195 [2024-11-18 08:07:43.231001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f3bc0 (9): Bad file descriptor 00:33:50.195 [2024-11-18 08:07:43.241045] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.195 [2024-11-18 08:07:43.241067] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.195 [2024-11-18 08:07:43.241077] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:50.195 [2024-11-18 08:07:43.241086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.195 [2024-11-18 08:07:43.241137] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:51.140 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:51.140 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.140 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:51.140 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.140 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:51.140 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.140 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:51.424 [2024-11-18 08:07:44.304530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:51.424 [2024-11-18 08:07:44.304609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3bc0 with addr=10.0.0.2, port=4420 00:33:51.424 [2024-11-18 08:07:44.304639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3bc0 is same with the state(6) to be set 00:33:51.424 [2024-11-18 08:07:44.304689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f3bc0 (9): Bad file descriptor 00:33:51.424 [2024-11-18 08:07:44.305125] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:33:51.424 [2024-11-18 08:07:44.305170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:51.424 [2024-11-18 08:07:44.305187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:51.424 [2024-11-18 08:07:44.305205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:51.424 [2024-11-18 08:07:44.305219] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:51.424 [2024-11-18 08:07:44.305231] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:51.424 [2024-11-18 08:07:44.305240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:51.424 [2024-11-18 08:07:44.305255] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:51.424 [2024-11-18 08:07:44.305264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:51.424 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.424 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:51.424 08:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:52.383 [2024-11-18 08:07:45.307761] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:52.383 [2024-11-18 08:07:45.307829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:52.383 [2024-11-18 08:07:45.307858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:52.383 [2024-11-18 08:07:45.307899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:52.383 [2024-11-18 08:07:45.307915] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:52.383 [2024-11-18 08:07:45.307929] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:52.383 [2024-11-18 08:07:45.307940] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:52.383 [2024-11-18 08:07:45.307948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:52.383 [2024-11-18 08:07:45.307993] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:52.383 [2024-11-18 08:07:45.308064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.383 [2024-11-18 08:07:45.308087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.383 [2024-11-18 08:07:45.308107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.383 [2024-11-18 08:07:45.308121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.383 [2024-11-18 08:07:45.308136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:52.383 [2024-11-18 08:07:45.308148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.383 [2024-11-18 08:07:45.308162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.383 [2024-11-18 08:07:45.308175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.383 [2024-11-18 08:07:45.308190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.383 [2024-11-18 08:07:45.308204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.383 [2024-11-18 08:07:45.308218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:52.383 [2024-11-18 08:07:45.308269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e32d0 (9): Bad file descriptor 00:33:52.383 [2024-11-18 08:07:45.309265] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:52.383 [2024-11-18 08:07:45.309303] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:52.383 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:52.384 08:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:53.763 08:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:54.334 [2024-11-18 08:07:47.361706] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:54.334 [2024-11-18 08:07:47.361744] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:54.334 [2024-11-18 08:07:47.361784] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:54.593 [2024-11-18 08:07:47.448049] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:54.593 08:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:54.593 [2024-11-18 08:07:47.623189] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:54.593 [2024-11-18 08:07:47.624085] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x7f5f20:1 started. 
00:33:54.593 [2024-11-18 08:07:47.625462] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:54.593 [2024-11-18 08:07:47.625532] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:54.593 [2024-11-18 08:07:47.625567] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:54.593 [2024-11-18 08:07:47.625590] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:54.593 [2024-11-18 08:07:47.625606] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:54.593 [2024-11-18 08:07:47.630474] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x7f5f20 was disconnected and freed. delete nvme_qpair. 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:55.531 08:07:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 869028 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 869028 ']' 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 869028 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.531 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 869028 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 869028' 00:33:55.790 killing process with pid 869028 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 869028 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 869028 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.790 08:07:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.790 rmmod nvme_tcp 00:33:55.790 rmmod nvme_fabrics 00:33:55.790 rmmod nvme_keyring 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 868887 ']' 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 868887 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 868887 ']' 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 868887 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.790 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 868887 00:33:56.049 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:56.049 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:56.049 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 868887' 00:33:56.049 killing process 
with pid 868887 00:33:56.049 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 868887 00:33:56.049 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 868887 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.049 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.586 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.586 00:33:58.586 real 0m17.759s 00:33:58.586 user 0m25.671s 00:33:58.586 sys 0m3.064s 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:58.587 ************************************ 00:33:58.587 END TEST nvmf_discovery_remove_ifc 00:33:58.587 ************************************ 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.587 ************************************ 00:33:58.587 START TEST nvmf_identify_kernel_target 00:33:58.587 ************************************ 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:58.587 * Looking for test storage... 
00:33:58.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:58.587 08:07:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.587 08:07:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:58.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.587 --rc genhtml_branch_coverage=1 00:33:58.587 --rc genhtml_function_coverage=1 00:33:58.587 --rc genhtml_legend=1 00:33:58.587 --rc geninfo_all_blocks=1 00:33:58.587 --rc geninfo_unexecuted_blocks=1 00:33:58.587 00:33:58.587 ' 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:58.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.587 --rc genhtml_branch_coverage=1 00:33:58.587 --rc genhtml_function_coverage=1 00:33:58.587 --rc genhtml_legend=1 00:33:58.587 --rc geninfo_all_blocks=1 00:33:58.587 --rc geninfo_unexecuted_blocks=1 00:33:58.587 00:33:58.587 ' 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:58.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.587 --rc genhtml_branch_coverage=1 00:33:58.587 --rc genhtml_function_coverage=1 00:33:58.587 --rc genhtml_legend=1 00:33:58.587 --rc geninfo_all_blocks=1 00:33:58.587 --rc geninfo_unexecuted_blocks=1 00:33:58.587 00:33:58.587 ' 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:58.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.587 --rc genhtml_branch_coverage=1 00:33:58.587 --rc genhtml_function_coverage=1 00:33:58.587 --rc genhtml_legend=1 00:33:58.587 --rc geninfo_all_blocks=1 00:33:58.587 --rc geninfo_unexecuted_blocks=1 00:33:58.587 00:33:58.587 ' 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.587 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:58.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.588 08:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.496 08:07:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:00.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.496 08:07:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:00.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.496 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.497 08:07:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:00.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:00.497 Found net devices under 0000:0a:00.1: cvl_0_1 
00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.497 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:00.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:00.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:34:00.756 00:34:00.756 --- 10.0.0.2 ping statistics --- 00:34:00.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.756 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:00.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:34:00.756 00:34:00.756 --- 10.0.0.1 ping statistics --- 00:34:00.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.756 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:00.756 
08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:00.756 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:00.757 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:00.757 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:00.757 08:07:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:02.141 Waiting for block devices as requested 00:34:02.141 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:02.141 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:02.141 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:02.400 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:02.400 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:02.400 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:02.400 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:02.660 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:02.660 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:02.660 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:02.660 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:02.920 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:02.920 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:02.920 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:03.180 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:34:03.180 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:03.180 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:03.440 No valid GPT data, bailing 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:03.440 00:34:03.440 Discovery Log Number of Records 2, Generation counter 2 00:34:03.440 =====Discovery Log Entry 0====== 00:34:03.440 trtype: tcp 00:34:03.440 adrfam: ipv4 00:34:03.440 subtype: current discovery subsystem 
00:34:03.440 treq: not specified, sq flow control disable supported 00:34:03.440 portid: 1 00:34:03.440 trsvcid: 4420 00:34:03.440 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:03.440 traddr: 10.0.0.1 00:34:03.440 eflags: none 00:34:03.440 sectype: none 00:34:03.440 =====Discovery Log Entry 1====== 00:34:03.440 trtype: tcp 00:34:03.440 adrfam: ipv4 00:34:03.440 subtype: nvme subsystem 00:34:03.440 treq: not specified, sq flow control disable supported 00:34:03.440 portid: 1 00:34:03.440 trsvcid: 4420 00:34:03.440 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:03.440 traddr: 10.0.0.1 00:34:03.440 eflags: none 00:34:03.440 sectype: none 00:34:03.440 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:03.440 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:03.703 ===================================================== 00:34:03.703 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:03.703 ===================================================== 00:34:03.703 Controller Capabilities/Features 00:34:03.703 ================================ 00:34:03.703 Vendor ID: 0000 00:34:03.703 Subsystem Vendor ID: 0000 00:34:03.703 Serial Number: 26ea3c3ea24dc932bd1a 00:34:03.703 Model Number: Linux 00:34:03.703 Firmware Version: 6.8.9-20 00:34:03.703 Recommended Arb Burst: 0 00:34:03.703 IEEE OUI Identifier: 00 00 00 00:34:03.703 Multi-path I/O 00:34:03.703 May have multiple subsystem ports: No 00:34:03.703 May have multiple controllers: No 00:34:03.703 Associated with SR-IOV VF: No 00:34:03.703 Max Data Transfer Size: Unlimited 00:34:03.703 Max Number of Namespaces: 0 00:34:03.703 Max Number of I/O Queues: 1024 00:34:03.703 NVMe Specification Version (VS): 1.3 00:34:03.703 NVMe Specification Version (Identify): 1.3 00:34:03.703 Maximum Queue Entries: 1024 
00:34:03.703 Contiguous Queues Required: No 00:34:03.703 Arbitration Mechanisms Supported 00:34:03.703 Weighted Round Robin: Not Supported 00:34:03.703 Vendor Specific: Not Supported 00:34:03.703 Reset Timeout: 7500 ms 00:34:03.703 Doorbell Stride: 4 bytes 00:34:03.703 NVM Subsystem Reset: Not Supported 00:34:03.703 Command Sets Supported 00:34:03.703 NVM Command Set: Supported 00:34:03.703 Boot Partition: Not Supported 00:34:03.703 Memory Page Size Minimum: 4096 bytes 00:34:03.703 Memory Page Size Maximum: 4096 bytes 00:34:03.703 Persistent Memory Region: Not Supported 00:34:03.703 Optional Asynchronous Events Supported 00:34:03.703 Namespace Attribute Notices: Not Supported 00:34:03.703 Firmware Activation Notices: Not Supported 00:34:03.703 ANA Change Notices: Not Supported 00:34:03.703 PLE Aggregate Log Change Notices: Not Supported 00:34:03.703 LBA Status Info Alert Notices: Not Supported 00:34:03.703 EGE Aggregate Log Change Notices: Not Supported 00:34:03.703 Normal NVM Subsystem Shutdown event: Not Supported 00:34:03.703 Zone Descriptor Change Notices: Not Supported 00:34:03.703 Discovery Log Change Notices: Supported 00:34:03.703 Controller Attributes 00:34:03.703 128-bit Host Identifier: Not Supported 00:34:03.703 Non-Operational Permissive Mode: Not Supported 00:34:03.703 NVM Sets: Not Supported 00:34:03.703 Read Recovery Levels: Not Supported 00:34:03.703 Endurance Groups: Not Supported 00:34:03.703 Predictable Latency Mode: Not Supported 00:34:03.703 Traffic Based Keep ALive: Not Supported 00:34:03.703 Namespace Granularity: Not Supported 00:34:03.703 SQ Associations: Not Supported 00:34:03.703 UUID List: Not Supported 00:34:03.703 Multi-Domain Subsystem: Not Supported 00:34:03.703 Fixed Capacity Management: Not Supported 00:34:03.703 Variable Capacity Management: Not Supported 00:34:03.703 Delete Endurance Group: Not Supported 00:34:03.703 Delete NVM Set: Not Supported 00:34:03.703 Extended LBA Formats Supported: Not Supported 00:34:03.703 Flexible 
Data Placement Supported: Not Supported 00:34:03.703 00:34:03.703 Controller Memory Buffer Support 00:34:03.703 ================================ 00:34:03.703 Supported: No 00:34:03.703 00:34:03.703 Persistent Memory Region Support 00:34:03.703 ================================ 00:34:03.703 Supported: No 00:34:03.703 00:34:03.703 Admin Command Set Attributes 00:34:03.703 ============================ 00:34:03.703 Security Send/Receive: Not Supported 00:34:03.703 Format NVM: Not Supported 00:34:03.703 Firmware Activate/Download: Not Supported 00:34:03.703 Namespace Management: Not Supported 00:34:03.703 Device Self-Test: Not Supported 00:34:03.703 Directives: Not Supported 00:34:03.703 NVMe-MI: Not Supported 00:34:03.703 Virtualization Management: Not Supported 00:34:03.703 Doorbell Buffer Config: Not Supported 00:34:03.703 Get LBA Status Capability: Not Supported 00:34:03.703 Command & Feature Lockdown Capability: Not Supported 00:34:03.703 Abort Command Limit: 1 00:34:03.703 Async Event Request Limit: 1 00:34:03.703 Number of Firmware Slots: N/A 00:34:03.703 Firmware Slot 1 Read-Only: N/A 00:34:03.703 Firmware Activation Without Reset: N/A 00:34:03.703 Multiple Update Detection Support: N/A 00:34:03.703 Firmware Update Granularity: No Information Provided 00:34:03.703 Per-Namespace SMART Log: No 00:34:03.703 Asymmetric Namespace Access Log Page: Not Supported 00:34:03.703 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:03.703 Command Effects Log Page: Not Supported 00:34:03.703 Get Log Page Extended Data: Supported 00:34:03.703 Telemetry Log Pages: Not Supported 00:34:03.703 Persistent Event Log Pages: Not Supported 00:34:03.703 Supported Log Pages Log Page: May Support 00:34:03.703 Commands Supported & Effects Log Page: Not Supported 00:34:03.703 Feature Identifiers & Effects Log Page:May Support 00:34:03.703 NVMe-MI Commands & Effects Log Page: May Support 00:34:03.703 Data Area 4 for Telemetry Log: Not Supported 00:34:03.703 Error Log Page Entries 
Supported: 1 00:34:03.703 Keep Alive: Not Supported 00:34:03.703 00:34:03.703 NVM Command Set Attributes 00:34:03.703 ========================== 00:34:03.703 Submission Queue Entry Size 00:34:03.703 Max: 1 00:34:03.703 Min: 1 00:34:03.703 Completion Queue Entry Size 00:34:03.703 Max: 1 00:34:03.703 Min: 1 00:34:03.703 Number of Namespaces: 0 00:34:03.703 Compare Command: Not Supported 00:34:03.703 Write Uncorrectable Command: Not Supported 00:34:03.703 Dataset Management Command: Not Supported 00:34:03.703 Write Zeroes Command: Not Supported 00:34:03.703 Set Features Save Field: Not Supported 00:34:03.703 Reservations: Not Supported 00:34:03.703 Timestamp: Not Supported 00:34:03.703 Copy: Not Supported 00:34:03.703 Volatile Write Cache: Not Present 00:34:03.703 Atomic Write Unit (Normal): 1 00:34:03.703 Atomic Write Unit (PFail): 1 00:34:03.703 Atomic Compare & Write Unit: 1 00:34:03.703 Fused Compare & Write: Not Supported 00:34:03.703 Scatter-Gather List 00:34:03.703 SGL Command Set: Supported 00:34:03.703 SGL Keyed: Not Supported 00:34:03.703 SGL Bit Bucket Descriptor: Not Supported 00:34:03.703 SGL Metadata Pointer: Not Supported 00:34:03.703 Oversized SGL: Not Supported 00:34:03.703 SGL Metadata Address: Not Supported 00:34:03.703 SGL Offset: Supported 00:34:03.703 Transport SGL Data Block: Not Supported 00:34:03.703 Replay Protected Memory Block: Not Supported 00:34:03.703 00:34:03.703 Firmware Slot Information 00:34:03.703 ========================= 00:34:03.703 Active slot: 0 00:34:03.703 00:34:03.703 00:34:03.703 Error Log 00:34:03.703 ========= 00:34:03.703 00:34:03.703 Active Namespaces 00:34:03.703 ================= 00:34:03.703 Discovery Log Page 00:34:03.703 ================== 00:34:03.703 Generation Counter: 2 00:34:03.703 Number of Records: 2 00:34:03.703 Record Format: 0 00:34:03.703 00:34:03.703 Discovery Log Entry 0 00:34:03.703 ---------------------- 00:34:03.703 Transport Type: 3 (TCP) 00:34:03.703 Address Family: 1 (IPv4) 00:34:03.703 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:34:03.703 Entry Flags: 00:34:03.703 Duplicate Returned Information: 0 00:34:03.703 Explicit Persistent Connection Support for Discovery: 0 00:34:03.703 Transport Requirements: 00:34:03.703 Secure Channel: Not Specified 00:34:03.703 Port ID: 1 (0x0001) 00:34:03.703 Controller ID: 65535 (0xffff) 00:34:03.703 Admin Max SQ Size: 32 00:34:03.703 Transport Service Identifier: 4420 00:34:03.703 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:03.703 Transport Address: 10.0.0.1 00:34:03.703 Discovery Log Entry 1 00:34:03.703 ---------------------- 00:34:03.703 Transport Type: 3 (TCP) 00:34:03.703 Address Family: 1 (IPv4) 00:34:03.703 Subsystem Type: 2 (NVM Subsystem) 00:34:03.703 Entry Flags: 00:34:03.703 Duplicate Returned Information: 0 00:34:03.703 Explicit Persistent Connection Support for Discovery: 0 00:34:03.703 Transport Requirements: 00:34:03.703 Secure Channel: Not Specified 00:34:03.703 Port ID: 1 (0x0001) 00:34:03.703 Controller ID: 65535 (0xffff) 00:34:03.704 Admin Max SQ Size: 32 00:34:03.704 Transport Service Identifier: 4420 00:34:03.704 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:03.704 Transport Address: 10.0.0.1 00:34:03.704 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:03.704 get_feature(0x01) failed 00:34:03.704 get_feature(0x02) failed 00:34:03.704 get_feature(0x04) failed 00:34:03.704 ===================================================== 00:34:03.704 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:03.704 ===================================================== 00:34:03.704 Controller Capabilities/Features 00:34:03.704 ================================ 00:34:03.704 Vendor ID: 0000 00:34:03.704 Subsystem Vendor ID: 
0000 00:34:03.704 Serial Number: d942f0ca882d5bfb2c0b 00:34:03.704 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:03.704 Firmware Version: 6.8.9-20 00:34:03.704 Recommended Arb Burst: 6 00:34:03.704 IEEE OUI Identifier: 00 00 00 00:34:03.704 Multi-path I/O 00:34:03.704 May have multiple subsystem ports: Yes 00:34:03.704 May have multiple controllers: Yes 00:34:03.704 Associated with SR-IOV VF: No 00:34:03.704 Max Data Transfer Size: Unlimited 00:34:03.704 Max Number of Namespaces: 1024 00:34:03.704 Max Number of I/O Queues: 128 00:34:03.704 NVMe Specification Version (VS): 1.3 00:34:03.704 NVMe Specification Version (Identify): 1.3 00:34:03.704 Maximum Queue Entries: 1024 00:34:03.704 Contiguous Queues Required: No 00:34:03.704 Arbitration Mechanisms Supported 00:34:03.704 Weighted Round Robin: Not Supported 00:34:03.704 Vendor Specific: Not Supported 00:34:03.704 Reset Timeout: 7500 ms 00:34:03.704 Doorbell Stride: 4 bytes 00:34:03.704 NVM Subsystem Reset: Not Supported 00:34:03.704 Command Sets Supported 00:34:03.704 NVM Command Set: Supported 00:34:03.704 Boot Partition: Not Supported 00:34:03.704 Memory Page Size Minimum: 4096 bytes 00:34:03.704 Memory Page Size Maximum: 4096 bytes 00:34:03.704 Persistent Memory Region: Not Supported 00:34:03.704 Optional Asynchronous Events Supported 00:34:03.704 Namespace Attribute Notices: Supported 00:34:03.704 Firmware Activation Notices: Not Supported 00:34:03.704 ANA Change Notices: Supported 00:34:03.704 PLE Aggregate Log Change Notices: Not Supported 00:34:03.704 LBA Status Info Alert Notices: Not Supported 00:34:03.704 EGE Aggregate Log Change Notices: Not Supported 00:34:03.704 Normal NVM Subsystem Shutdown event: Not Supported 00:34:03.704 Zone Descriptor Change Notices: Not Supported 00:34:03.704 Discovery Log Change Notices: Not Supported 00:34:03.704 Controller Attributes 00:34:03.704 128-bit Host Identifier: Supported 00:34:03.704 Non-Operational Permissive Mode: Not Supported 00:34:03.704 NVM Sets: Not 
Supported 00:34:03.704 Read Recovery Levels: Not Supported 00:34:03.704 Endurance Groups: Not Supported 00:34:03.704 Predictable Latency Mode: Not Supported 00:34:03.704 Traffic Based Keep ALive: Supported 00:34:03.704 Namespace Granularity: Not Supported 00:34:03.704 SQ Associations: Not Supported 00:34:03.704 UUID List: Not Supported 00:34:03.704 Multi-Domain Subsystem: Not Supported 00:34:03.704 Fixed Capacity Management: Not Supported 00:34:03.704 Variable Capacity Management: Not Supported 00:34:03.704 Delete Endurance Group: Not Supported 00:34:03.704 Delete NVM Set: Not Supported 00:34:03.704 Extended LBA Formats Supported: Not Supported 00:34:03.704 Flexible Data Placement Supported: Not Supported 00:34:03.704 00:34:03.704 Controller Memory Buffer Support 00:34:03.704 ================================ 00:34:03.704 Supported: No 00:34:03.704 00:34:03.704 Persistent Memory Region Support 00:34:03.704 ================================ 00:34:03.704 Supported: No 00:34:03.704 00:34:03.704 Admin Command Set Attributes 00:34:03.704 ============================ 00:34:03.704 Security Send/Receive: Not Supported 00:34:03.704 Format NVM: Not Supported 00:34:03.704 Firmware Activate/Download: Not Supported 00:34:03.704 Namespace Management: Not Supported 00:34:03.704 Device Self-Test: Not Supported 00:34:03.704 Directives: Not Supported 00:34:03.704 NVMe-MI: Not Supported 00:34:03.704 Virtualization Management: Not Supported 00:34:03.704 Doorbell Buffer Config: Not Supported 00:34:03.704 Get LBA Status Capability: Not Supported 00:34:03.704 Command & Feature Lockdown Capability: Not Supported 00:34:03.704 Abort Command Limit: 4 00:34:03.704 Async Event Request Limit: 4 00:34:03.704 Number of Firmware Slots: N/A 00:34:03.704 Firmware Slot 1 Read-Only: N/A 00:34:03.704 Firmware Activation Without Reset: N/A 00:34:03.704 Multiple Update Detection Support: N/A 00:34:03.704 Firmware Update Granularity: No Information Provided 00:34:03.704 Per-Namespace SMART Log: Yes 
00:34:03.704 Asymmetric Namespace Access Log Page: Supported 00:34:03.704 ANA Transition Time : 10 sec 00:34:03.704 00:34:03.704 Asymmetric Namespace Access Capabilities 00:34:03.704 ANA Optimized State : Supported 00:34:03.704 ANA Non-Optimized State : Supported 00:34:03.704 ANA Inaccessible State : Supported 00:34:03.704 ANA Persistent Loss State : Supported 00:34:03.704 ANA Change State : Supported 00:34:03.704 ANAGRPID is not changed : No 00:34:03.704 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:03.704 00:34:03.704 ANA Group Identifier Maximum : 128 00:34:03.704 Number of ANA Group Identifiers : 128 00:34:03.704 Max Number of Allowed Namespaces : 1024 00:34:03.704 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:03.704 Command Effects Log Page: Supported 00:34:03.704 Get Log Page Extended Data: Supported 00:34:03.704 Telemetry Log Pages: Not Supported 00:34:03.704 Persistent Event Log Pages: Not Supported 00:34:03.704 Supported Log Pages Log Page: May Support 00:34:03.704 Commands Supported & Effects Log Page: Not Supported 00:34:03.704 Feature Identifiers & Effects Log Page:May Support 00:34:03.704 NVMe-MI Commands & Effects Log Page: May Support 00:34:03.704 Data Area 4 for Telemetry Log: Not Supported 00:34:03.704 Error Log Page Entries Supported: 128 00:34:03.704 Keep Alive: Supported 00:34:03.704 Keep Alive Granularity: 1000 ms 00:34:03.704 00:34:03.704 NVM Command Set Attributes 00:34:03.704 ========================== 00:34:03.704 Submission Queue Entry Size 00:34:03.704 Max: 64 00:34:03.704 Min: 64 00:34:03.704 Completion Queue Entry Size 00:34:03.704 Max: 16 00:34:03.704 Min: 16 00:34:03.704 Number of Namespaces: 1024 00:34:03.704 Compare Command: Not Supported 00:34:03.704 Write Uncorrectable Command: Not Supported 00:34:03.704 Dataset Management Command: Supported 00:34:03.704 Write Zeroes Command: Supported 00:34:03.704 Set Features Save Field: Not Supported 00:34:03.704 Reservations: Not Supported 00:34:03.704 Timestamp: Not Supported 
00:34:03.704 Copy: Not Supported 00:34:03.704 Volatile Write Cache: Present 00:34:03.704 Atomic Write Unit (Normal): 1 00:34:03.704 Atomic Write Unit (PFail): 1 00:34:03.704 Atomic Compare & Write Unit: 1 00:34:03.704 Fused Compare & Write: Not Supported 00:34:03.704 Scatter-Gather List 00:34:03.704 SGL Command Set: Supported 00:34:03.704 SGL Keyed: Not Supported 00:34:03.704 SGL Bit Bucket Descriptor: Not Supported 00:34:03.704 SGL Metadata Pointer: Not Supported 00:34:03.704 Oversized SGL: Not Supported 00:34:03.704 SGL Metadata Address: Not Supported 00:34:03.704 SGL Offset: Supported 00:34:03.704 Transport SGL Data Block: Not Supported 00:34:03.704 Replay Protected Memory Block: Not Supported 00:34:03.704 00:34:03.704 Firmware Slot Information 00:34:03.704 ========================= 00:34:03.704 Active slot: 0 00:34:03.704 00:34:03.704 Asymmetric Namespace Access 00:34:03.704 =========================== 00:34:03.704 Change Count : 0 00:34:03.704 Number of ANA Group Descriptors : 1 00:34:03.704 ANA Group Descriptor : 0 00:34:03.704 ANA Group ID : 1 00:34:03.704 Number of NSID Values : 1 00:34:03.704 Change Count : 0 00:34:03.704 ANA State : 1 00:34:03.704 Namespace Identifier : 1 00:34:03.704 00:34:03.704 Commands Supported and Effects 00:34:03.704 ============================== 00:34:03.704 Admin Commands 00:34:03.704 -------------- 00:34:03.704 Get Log Page (02h): Supported 00:34:03.704 Identify (06h): Supported 00:34:03.704 Abort (08h): Supported 00:34:03.704 Set Features (09h): Supported 00:34:03.704 Get Features (0Ah): Supported 00:34:03.704 Asynchronous Event Request (0Ch): Supported 00:34:03.704 Keep Alive (18h): Supported 00:34:03.704 I/O Commands 00:34:03.704 ------------ 00:34:03.704 Flush (00h): Supported 00:34:03.704 Write (01h): Supported LBA-Change 00:34:03.704 Read (02h): Supported 00:34:03.704 Write Zeroes (08h): Supported LBA-Change 00:34:03.705 Dataset Management (09h): Supported 00:34:03.705 00:34:03.705 Error Log 00:34:03.705 ========= 
00:34:03.705 Entry: 0 00:34:03.705 Error Count: 0x3 00:34:03.705 Submission Queue Id: 0x0 00:34:03.705 Command Id: 0x5 00:34:03.705 Phase Bit: 0 00:34:03.705 Status Code: 0x2 00:34:03.705 Status Code Type: 0x0 00:34:03.705 Do Not Retry: 1 00:34:03.705 Error Location: 0x28 00:34:03.705 LBA: 0x0 00:34:03.705 Namespace: 0x0 00:34:03.705 Vendor Log Page: 0x0 00:34:03.705 ----------- 00:34:03.705 Entry: 1 00:34:03.705 Error Count: 0x2 00:34:03.705 Submission Queue Id: 0x0 00:34:03.705 Command Id: 0x5 00:34:03.705 Phase Bit: 0 00:34:03.705 Status Code: 0x2 00:34:03.705 Status Code Type: 0x0 00:34:03.705 Do Not Retry: 1 00:34:03.705 Error Location: 0x28 00:34:03.705 LBA: 0x0 00:34:03.705 Namespace: 0x0 00:34:03.705 Vendor Log Page: 0x0 00:34:03.705 ----------- 00:34:03.705 Entry: 2 00:34:03.705 Error Count: 0x1 00:34:03.705 Submission Queue Id: 0x0 00:34:03.705 Command Id: 0x4 00:34:03.705 Phase Bit: 0 00:34:03.705 Status Code: 0x2 00:34:03.705 Status Code Type: 0x0 00:34:03.705 Do Not Retry: 1 00:34:03.705 Error Location: 0x28 00:34:03.705 LBA: 0x0 00:34:03.705 Namespace: 0x0 00:34:03.705 Vendor Log Page: 0x0 00:34:03.705 00:34:03.705 Number of Queues 00:34:03.705 ================ 00:34:03.705 Number of I/O Submission Queues: 128 00:34:03.705 Number of I/O Completion Queues: 128 00:34:03.705 00:34:03.705 ZNS Specific Controller Data 00:34:03.705 ============================ 00:34:03.705 Zone Append Size Limit: 0 00:34:03.705 00:34:03.705 00:34:03.705 Active Namespaces 00:34:03.705 ================= 00:34:03.705 get_feature(0x05) failed 00:34:03.705 Namespace ID:1 00:34:03.705 Command Set Identifier: NVM (00h) 00:34:03.705 Deallocate: Supported 00:34:03.705 Deallocated/Unwritten Error: Not Supported 00:34:03.705 Deallocated Read Value: Unknown 00:34:03.705 Deallocate in Write Zeroes: Not Supported 00:34:03.705 Deallocated Guard Field: 0xFFFF 00:34:03.705 Flush: Supported 00:34:03.705 Reservation: Not Supported 00:34:03.705 Namespace Sharing Capabilities: Multiple 
Controllers 00:34:03.705 Size (in LBAs): 1953525168 (931GiB) 00:34:03.705 Capacity (in LBAs): 1953525168 (931GiB) 00:34:03.705 Utilization (in LBAs): 1953525168 (931GiB) 00:34:03.705 UUID: 92e7ed96-5bcc-4723-b6d6-bfbe0d22f633 00:34:03.705 Thin Provisioning: Not Supported 00:34:03.705 Per-NS Atomic Units: Yes 00:34:03.705 Atomic Boundary Size (Normal): 0 00:34:03.705 Atomic Boundary Size (PFail): 0 00:34:03.705 Atomic Boundary Offset: 0 00:34:03.705 NGUID/EUI64 Never Reused: No 00:34:03.705 ANA group ID: 1 00:34:03.705 Namespace Write Protected: No 00:34:03.705 Number of LBA Formats: 1 00:34:03.705 Current LBA Format: LBA Format #00 00:34:03.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:03.705 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:03.705 rmmod nvme_tcp 00:34:03.705 rmmod nvme_fabrics 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.705 08:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.243 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.243 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:06.244 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:06.244 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:06.244 08:07:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:06.244 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:06.244 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:06.244 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:06.244 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:06.244 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:06.244 08:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:07.182 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:07.182 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:07.182 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:07.182 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:07.182 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:07.182 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:07.182 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:07.182 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:07.182 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:07.182 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:07.182 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:07.182 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:07.182 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:07.182 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:07.182 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:07.182 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:34:08.122 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:08.380 00:34:08.380 real 0m10.060s 00:34:08.380 user 0m2.198s 00:34:08.380 sys 0m3.779s 00:34:08.380 08:08:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.380 08:08:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.380 ************************************ 00:34:08.380 END TEST nvmf_identify_kernel_target 00:34:08.380 ************************************ 00:34:08.380 08:08:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:08.380 08:08:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:08.380 08:08:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.380 08:08:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.381 ************************************ 00:34:08.381 START TEST nvmf_auth_host 00:34:08.381 ************************************ 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:08.381 * Looking for test storage... 
00:34:08.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:08.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.381 --rc genhtml_branch_coverage=1 00:34:08.381 --rc genhtml_function_coverage=1 00:34:08.381 --rc genhtml_legend=1 00:34:08.381 --rc geninfo_all_blocks=1 00:34:08.381 --rc geninfo_unexecuted_blocks=1 00:34:08.381 00:34:08.381 ' 00:34:08.381 08:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:08.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.381 --rc genhtml_branch_coverage=1 00:34:08.381 --rc genhtml_function_coverage=1 00:34:08.381 --rc genhtml_legend=1 00:34:08.381 --rc geninfo_all_blocks=1 00:34:08.381 --rc geninfo_unexecuted_blocks=1 00:34:08.381 00:34:08.381 ' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:08.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.381 --rc genhtml_branch_coverage=1 00:34:08.381 --rc genhtml_function_coverage=1 00:34:08.381 --rc genhtml_legend=1 00:34:08.381 --rc geninfo_all_blocks=1 00:34:08.381 --rc geninfo_unexecuted_blocks=1 00:34:08.381 00:34:08.381 ' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:08.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.381 --rc genhtml_branch_coverage=1 00:34:08.381 --rc genhtml_function_coverage=1 00:34:08.381 --rc genhtml_legend=1 00:34:08.381 --rc geninfo_all_blocks=1 00:34:08.381 --rc geninfo_unexecuted_blocks=1 00:34:08.381 00:34:08.381 ' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.381 08:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:08.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.381 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.382 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.639 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.639 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.640 08:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.640 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.640 08:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:11.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:11.175 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:11.175 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:11.175 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:11.175 08:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.175 08:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:34:11.175 00:34:11.175 --- 10.0.0.2 ping statistics --- 00:34:11.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.175 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:34:11.175 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:11.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:34:11.175 00:34:11.176 --- 10.0.0.1 ping statistics --- 00:34:11.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.176 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=876243 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:11.176 08:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 876243 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 876243 ']' 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.176 08:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9d982645cb08a72a66fbda159771dbef 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XjV 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9d982645cb08a72a66fbda159771dbef 0 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9d982645cb08a72a66fbda159771dbef 0 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9d982645cb08a72a66fbda159771dbef 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XjV 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XjV 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.XjV 00:34:11.176 08:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=adf8cdc6df60181aa75bd6f33bd3d970bc0e5999e88fc368a331e0bf141cff6e 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.q8O 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key adf8cdc6df60181aa75bd6f33bd3d970bc0e5999e88fc368a331e0bf141cff6e 3 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 adf8cdc6df60181aa75bd6f33bd3d970bc0e5999e88fc368a331e0bf141cff6e 3 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=adf8cdc6df60181aa75bd6f33bd3d970bc0e5999e88fc368a331e0bf141cff6e 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.q8O 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.q8O 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.q8O 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=340ce09f217a63fe40afb47dd384256b5b90a9815d3830e3 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.MvO 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 340ce09f217a63fe40afb47dd384256b5b90a9815d3830e3 0 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 340ce09f217a63fe40afb47dd384256b5b90a9815d3830e3 0 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.176 08:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=340ce09f217a63fe40afb47dd384256b5b90a9815d3830e3 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:11.176 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.MvO 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.MvO 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.MvO 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e56cedcbaf2026cf0dde25ccd484a2c31965cec07371ff62 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xoy 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e56cedcbaf2026cf0dde25ccd484a2c31965cec07371ff62 2 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 e56cedcbaf2026cf0dde25ccd484a2c31965cec07371ff62 2 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e56cedcbaf2026cf0dde25ccd484a2c31965cec07371ff62 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xoy 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xoy 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.xoy 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=770c96e8cd094a3d544f84a5ef8ff193 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.V6r 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 770c96e8cd094a3d544f84a5ef8ff193 1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 770c96e8cd094a3d544f84a5ef8ff193 1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=770c96e8cd094a3d544f84a5ef8ff193 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.V6r 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.V6r 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.V6r 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=414f4248e9dd5b404ce4a26125b0a960 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.njA 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 414f4248e9dd5b404ce4a26125b0a960 1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 414f4248e9dd5b404ce4a26125b0a960 1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=414f4248e9dd5b404ce4a26125b0a960 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.njA 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.njA 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.njA 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:11.436 08:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b133ed9b9deca55c9069c4fa120486a3be496419a37559c 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0gi 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b133ed9b9deca55c9069c4fa120486a3be496419a37559c 2 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b133ed9b9deca55c9069c4fa120486a3be496419a37559c 2 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b133ed9b9deca55c9069c4fa120486a3be496419a37559c 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0gi 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0gi 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.0gi 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3d21bf59ee08afad5041ae9ea0bb4c02 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.BVf 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3d21bf59ee08afad5041ae9ea0bb4c02 0 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3d21bf59ee08afad5041ae9ea0bb4c02 0 00:34:11.436 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3d21bf59ee08afad5041ae9ea0bb4c02 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.BVf 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.BVf 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.BVf 00:34:11.437 08:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c20482e6edc1109753cd8d2417f38362b7e60920f1fa6a23ad3a73e55449c964 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zXf 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c20482e6edc1109753cd8d2417f38362b7e60920f1fa6a23ad3a73e55449c964 3 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c20482e6edc1109753cd8d2417f38362b7e60920f1fa6a23ad3a73e55449c964 3 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c20482e6edc1109753cd8d2417f38362b7e60920f1fa6a23ad3a73e55449c964 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:11.437 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zXf 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zXf 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zXf 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 876243 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 876243 ']' 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
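The gen_dhchap_key/format_dhchap_key calls traced above draw random bytes with xxd and wrap them into the DH-HMAC-CHAP secret representation. A minimal sketch of that flow, simplified from the nvmf/common.sh helpers in the trace (digest codes follow the digests map shown there: null=00, sha256=01, sha384=02, sha512=03; the CRC-32 suffix is the NVMe DH-HMAC-CHAP secret format, which the real helper computes in an equivalent inline python step):

```shell
# Sketch of gen_dhchap_key sha384 48, as traced above (assumes xxd and python3).
len=48                                          # hex chars requested for sha384
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-sha384.XXX)
# DH-HMAC-CHAP secret representation: DHHC-1:<digest>:<base64(key || CRC-32)>:
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")     # little-endian CRC-32 of the key
print("DHHC-1:02:{}:".format(base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$file"                              # keep the secret file private
cat "$file"
```

The resulting file is what the later keyring_file_add_key RPCs in this trace register with the target.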
00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.696 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XjV 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.q8O ]] 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q8O 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.MvO 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
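The rpc_cmd keyring_file_add_key calls above follow one pattern per key/ckey pair: register keyN, then register ckeyN only if a controller key was generated for that slot. A dry-run sketch of that loop, using the temp-file names from this trace (rpc() here just prints the RPC; swap its body for scripts/rpc.py against a live target to actually issue it):

```shell
# Dry-run of the keyring registration loop seen in host/auth.sh@80-82 above.
rpc() { echo "scripts/rpc.py $*"; }      # prints instead of issuing the RPC
keys=(/tmp/spdk.key-null.XjV /tmp/spdk.key-null.MvO)
ckeys=(/tmp/spdk.key-sha512.q8O /tmp/spdk.key-sha384.xoy)
out=$(
  for i in "${!keys[@]}"; do
    rpc keyring_file_add_key "key$i" "${keys[i]}"
    if [ -n "${ckeys[i]:-}" ]; then      # ckey slot may be empty (see ckeys[4]=)
      rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
  done
)
echo "$out"
```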
00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.xoy ]] 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xoy 00:34:11.954 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.V6r 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.njA ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.njA 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.0gi 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.BVf ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.BVf 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zXf 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.955 08:08:04 
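The get_main_ns_ip helper entered at this point in the trace resolves the target address by mapping the transport to an environment-variable name and then expanding that name indirectly. A self-contained sketch of that selection (transport and address values are the ones appearing in this log):

```shell
# get_main_ns_ip candidate selection, as in nvmf/common.sh@769-783 above.
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
TEST_TRANSPORT=tcp                       # illustrative; set by the test harness
NVMF_INITIATOR_IP=10.0.0.1
var=${ip_candidates[$TEST_TRANSPORT]}    # -> NVMF_INITIATOR_IP
ip=${!var}                               # bash indirect expansion -> 10.0.0.1
echo "$ip"
```

The guard clauses in the trace (`[[ -z tcp ]]`, `[[ -z 10.0.0.1 ]]`) are just this lookup's error checks for an unset transport or address.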
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:11.955 08:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:12.889 Waiting for block devices as requested 00:34:12.889 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:13.148 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:13.148 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:13.406 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:13.406 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:13.406 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:13.406 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:13.664 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:13.664 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:13.664 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:13.664 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:13.922 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:13.922 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:13.922 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:13.922 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:13.922 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:14.179 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:14.438 No valid GPT data, bailing 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:14.438 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:14.697 00:34:14.697 Discovery Log Number of Records 2, Generation counter 2 00:34:14.697 =====Discovery Log Entry 0====== 00:34:14.697 trtype: tcp 00:34:14.697 adrfam: ipv4 00:34:14.697 subtype: current discovery subsystem 00:34:14.697 treq: not specified, sq flow control disable supported 00:34:14.697 portid: 1 00:34:14.697 trsvcid: 4420 00:34:14.697 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:14.697 traddr: 10.0.0.1 00:34:14.697 eflags: none 00:34:14.697 sectype: none 00:34:14.697 =====Discovery Log Entry 1====== 00:34:14.697 trtype: tcp 00:34:14.697 adrfam: ipv4 00:34:14.697 subtype: nvme subsystem 00:34:14.697 treq: not specified, sq flow control disable supported 00:34:14.697 portid: 1 00:34:14.697 trsvcid: 4420 00:34:14.697 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:14.697 traddr: 10.0.0.1 00:34:14.697 eflags: none 00:34:14.697 sectype: none 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.697 nvme0n1 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.697 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.698 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.698 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 nvme0n1 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.956 08:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.956 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.956 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.956 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.956 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.215 08:08:08 
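Each connect_authenticate iteration traced above reduces to two RPCs: constrain the host's allowed DH-HMAC-CHAP digests and DH groups, then attach with the keyring names registered earlier. A dry-run sketch with the parameters from this pass (rpc() prints the call; replace echo with scripts/rpc.py to run it for real):

```shell
# Dry-run of connect_authenticate sha256 ffdhe2048 0, per host/auth.sh@57-61.
rpc() { echo "scripts/rpc.py $*"; }      # prints instead of issuing the RPC
digest=sha256 dhgroup=ffdhe2048 keyid=0
out=$(
  rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
)
echo "$out"
```

The surrounding bdev_nvme_get_controllers / bdev_nvme_detach_controller calls in the trace simply verify that `nvme0` appeared and tear it down before the next digest/dhgroup/key combination.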
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.215 
08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.215 nvme0n1 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:15.215 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.216 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:15.473 nvme0n1 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.473 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.732 nvme0n1 00:34:15.732 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.732 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.732 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:15.732 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.732 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.732 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.733 08:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.733 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.992 nvme0n1 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.992 
08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:15.992 
08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.992 08:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.992 08:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.253 nvme0n1 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.253 08:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.253 08:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.253 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.512 nvme0n1 00:34:16.512 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.512 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.512 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.512 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.512 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.512 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.513 08:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.513 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.772 nvme0n1 00:34:16.772 08:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:16.772 08:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.772 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.031 nvme0n1 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.031 08:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.032 08:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.032 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.290 nvme0n1 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.290 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.291 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.549 nvme0n1 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.549 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.550 
08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.550 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.808 nvme0n1 00:34:17.808 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.808 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.808 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.808 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.808 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.808 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.067 08:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.067 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.068 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.068 08:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.326 nvme0n1 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.326 08:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:18.326 
08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:18.326 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.327 08:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.327 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.585 nvme0n1 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.585 08:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.585 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.586 
08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.586 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.844 nvme0n1 00:34:18.844 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.844 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.844 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.844 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.844 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.844 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.102 08:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.102 08:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.669 nvme0n1 00:34:19.669 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.669 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.669 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.669 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.670 08:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.670 08:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.928 nvme0n1 00:34:19.928 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.928 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.928 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.928 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.928 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.928 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.186 08:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.186 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.751 nvme0n1 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.751 08:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.751 08:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.751 08:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.751 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.752 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.752 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.752 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.752 08:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.318 nvme0n1 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.318 08:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.318 08:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.318 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.885 nvme0n1 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.885 08:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.885 08:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.819 nvme0n1 00:34:22.819 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.819 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.819 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.819 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.819 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.819 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.820 08:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.820 08:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.820 08:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.820 08:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.754 nvme0n1 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.754 08:08:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.754 08:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.790 nvme0n1 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.790 08:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.357 nvme0n1 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.357 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.617 
08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.617 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.618 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.618 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.618 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.618 08:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.558 nvme0n1 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.558 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.559 nvme0n1 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.559 
08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.559 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.819 nvme0n1 
00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.819 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:26.820 08:08:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.820 
08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.820 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.080 nvme0n1 00:34:27.080 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.080 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.080 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.080 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.080 08:08:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.080 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.081 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.342 nvme0n1 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.342 08:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.342 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 nvme0n1 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.863 nvme0n1 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.863 
08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.863 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.864 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.864 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.864 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.124 nvme0n1 00:34:28.124 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:28.124 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.124 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.124 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.124 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.124 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.124 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.124 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.124 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.124 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 
00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.125 08:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.125 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.384 nvme0n1 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.384 08:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.384 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.385 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.643 nvme0n1 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.643 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.644 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.644 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.644 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.902 nvme0n1 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.902 08:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.902 08:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.902 08:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.902 08:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.161 nvme0n1 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.161 
08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.161 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.421 nvme0n1 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.421 08:08:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.421 08:08:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.421 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.680 nvme0n1 00:34:29.680 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.680 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.680 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.680 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.680 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.680 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.938 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.938 08:08:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.939 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.939 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.939 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.939 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.939 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.939 08:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.199 nvme0n1 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.199 08:08:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.199 08:08:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.199 
08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.199 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.460 nvme0n1 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.460 08:08:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.460 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.029 nvme0n1 
00:34:31.029 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.029 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.029 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.029 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.029 08:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:31.029 08:08:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.029 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.030 
08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.030 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 nvme0n1 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.600 08:08:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.600 08:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.171 nvme0n1 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.171 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.742 nvme0n1 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.742 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.743 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:33.309 nvme0n1 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.309 08:08:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.309 08:08:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.309 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 nvme0n1 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:34.247 08:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.247 08:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.183 nvme0n1 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.183 
08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:35.183 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.184 08:08:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.184 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.122 nvme0n1 00:34:36.122 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.122 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.122 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.122 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.122 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.122 08:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.122 08:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.122 08:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.122 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.061 nvme0n1 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:37.061 08:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.061 08:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.999 nvme0n1 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.999 
08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.999 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.000 08:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.000 nvme0n1 00:34:38.000 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.000 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.000 08:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.000 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.000 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.000 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:38.259 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.260 nvme0n1 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.260 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:38.531 08:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.531 nvme0n1 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.531 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.532 08:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.532 08:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.532 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.792 nvme0n1 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.792 08:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.792 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.053 nvme0n1 00:34:39.053 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.053 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:39.053 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.053 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.053 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.053 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:39.053 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.054 08:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.054 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.315 nvme0n1 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.315 08:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.315 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.316 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.316 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.316 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.316 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.316 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.316 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.316 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.576 nvme0n1 00:34:39.576 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.576 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.576 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.576 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.576 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:39.577 
08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.577 08:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.577 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.838 nvme0n1 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.838 08:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.838 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.099 nvme0n1 00:34:40.099 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.099 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.099 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.099 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.099 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.099 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.099 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.099 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.358 nvme0n1 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.359 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.359 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.617 nvme0n1 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.617 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:40.617 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.617 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.618 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.618 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.876 nvme0n1 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.876 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.876 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:41.135 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.135 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.393 nvme0n1 00:34:41.393 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.393 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.393 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.393 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.393 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.393 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.393 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.393 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.394 08:08:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.394 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.653 nvme0n1 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.653 
08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.653 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.912 nvme0n1 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.912 08:08:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.912 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.913 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.913 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.913 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.913 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.483 nvme0n1 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:42.483 08:08:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.483 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.054 nvme0n1 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.054 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.055 
08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.055 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.622 nvme0n1 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.622 08:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.622 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.623 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:44.189 nvme0n1 00:34:44.189 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.189 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.189 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.189 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.190 
08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.190 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.758 nvme0n1 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:44.758 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWQ5ODI2NDVjYjA4YTcyYTY2ZmJkYTE1OTc3MWRiZWZGGZqG: 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: ]] 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWRmOGNkYzZkZjYwMTgxYWE3NWJkNmYzM2JkM2Q5NzBiYzBlNTk5OWU4OGZjMzY4YTMzMWUwYmYxNDFjZmY2ZUJUEeQ=: 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.759 08:08:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.759 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.699 nvme0n1 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.699 08:08:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.699 08:08:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.699 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.639 nvme0n1 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.639 08:08:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.639 08:08:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.639 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.640 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.578 nvme0n1 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.578 08:08:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmIxMzNlZDliOWRlY2E1NWM5MDY5YzRmYTEyMDQ4NmEzYmU0OTY0MTlhMzc1NTljScdW7Q==: 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2QyMWJmNTllZTA4YWZhZDUwNDFhZTllYTBiYjRjMDLV/Eii: 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:48.513 nvme0n1 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzIwNDgyZTZlZGMxMTA5NzUzY2Q4ZDI0MTdmMzgzNjJiN2U2MDkyMGYxZmE2YTIzYWQzYTczZTU1NDQ5Yzk2NKLydiw=: 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.513 
08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.513 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.082 nvme0n1 00:34:49.082 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.082 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.082 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.082 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.082 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:49.342 
08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.342 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.343 request: 00:34:49.343 { 00:34:49.343 "name": "nvme0", 00:34:49.343 "trtype": "tcp", 00:34:49.343 "traddr": "10.0.0.1", 00:34:49.343 "adrfam": "ipv4", 00:34:49.343 "trsvcid": "4420", 00:34:49.343 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:49.343 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:49.343 "prchk_reftag": false, 00:34:49.343 "prchk_guard": false, 00:34:49.343 "hdgst": false, 00:34:49.343 "ddgst": false, 00:34:49.343 "allow_unrecognized_csi": false, 00:34:49.343 "method": "bdev_nvme_attach_controller", 00:34:49.343 "req_id": 1 00:34:49.343 } 00:34:49.343 Got JSON-RPC error response 00:34:49.343 response: 00:34:49.343 { 00:34:49.343 "code": -5, 00:34:49.343 "message": "Input/output 
error" 00:34:49.343 } 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.343 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.603 request: 00:34:49.603 { 00:34:49.603 "name": "nvme0", 00:34:49.603 "trtype": "tcp", 00:34:49.603 "traddr": "10.0.0.1", 
00:34:49.603 "adrfam": "ipv4", 00:34:49.603 "trsvcid": "4420", 00:34:49.603 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:49.603 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:49.603 "prchk_reftag": false, 00:34:49.603 "prchk_guard": false, 00:34:49.603 "hdgst": false, 00:34:49.603 "ddgst": false, 00:34:49.603 "dhchap_key": "key2", 00:34:49.603 "allow_unrecognized_csi": false, 00:34:49.603 "method": "bdev_nvme_attach_controller", 00:34:49.603 "req_id": 1 00:34:49.603 } 00:34:49.603 Got JSON-RPC error response 00:34:49.603 response: 00:34:49.603 { 00:34:49.603 "code": -5, 00:34:49.603 "message": "Input/output error" 00:34:49.603 } 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.603 08:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:49.603 08:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.603 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.603 request: 00:34:49.603 { 00:34:49.603 "name": "nvme0", 00:34:49.603 "trtype": "tcp", 00:34:49.603 "traddr": "10.0.0.1", 00:34:49.603 "adrfam": "ipv4", 00:34:49.603 "trsvcid": "4420", 00:34:49.603 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:49.603 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:49.603 "prchk_reftag": false, 00:34:49.603 "prchk_guard": false, 00:34:49.603 "hdgst": false, 00:34:49.603 "ddgst": false, 00:34:49.603 "dhchap_key": "key1", 00:34:49.603 "dhchap_ctrlr_key": "ckey2", 00:34:49.603 "allow_unrecognized_csi": false, 00:34:49.603 "method": "bdev_nvme_attach_controller", 00:34:49.603 "req_id": 1 00:34:49.603 } 00:34:49.604 Got JSON-RPC error response 00:34:49.604 response: 00:34:49.604 { 00:34:49.604 "code": -5, 00:34:49.604 "message": "Input/output error" 00:34:49.604 } 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.604 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.862 nvme0n1 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.862 08:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.862 08:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.862 request: 00:34:49.862 { 00:34:49.862 "name": "nvme0", 00:34:49.862 "dhchap_key": "key1", 00:34:49.862 "dhchap_ctrlr_key": "ckey2", 00:34:49.862 "method": "bdev_nvme_set_keys", 00:34:49.862 "req_id": 1 00:34:49.862 } 00:34:49.862 Got JSON-RPC error response 00:34:49.862 response: 00:34:49.862 { 00:34:49.862 "code": -13, 00:34:49.862 "message": "Permission denied" 00:34:49.862 } 00:34:49.862 
08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:49.862 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:51.244 08:08:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQwY2UwOWYyMTdhNjNmZTQwYWZiNDdkZDM4NDI1NmI1YjkwYTk4MTVkMzgzMGUzyHtquw==: 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: ]] 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU2Y2VkY2JhZjIwMjZjZjBkZGUyNWNjZDQ4NGEyYzMxOTY1Y2VjMDczNzFmZjYyCEVymA==: 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.244 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.244 nvme0n1 00:34:51.244 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.244 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:51.244 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcwYzk2ZThjZDA5NGEzZDU0NGY4NGE1ZWY4ZmYxOTPvDI+6: 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: ]] 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDE0ZjQyNDhlOWRkNWI0MDRjZTRhMjYxMjViMGE5NjDTW0CW: 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:51.245 request: 00:34:51.245 { 00:34:51.245 "name": "nvme0", 00:34:51.245 "dhchap_key": "key2", 00:34:51.245 "dhchap_ctrlr_key": "ckey1", 00:34:51.245 "method": "bdev_nvme_set_keys", 00:34:51.245 "req_id": 1 00:34:51.245 } 00:34:51.245 Got JSON-RPC error response 00:34:51.245 response: 00:34:51.245 { 00:34:51.245 "code": -13, 00:34:51.245 "message": "Permission denied" 00:34:51.245 } 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:51.245 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:52.178 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.178 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:52.178 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.178 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:34:52.179 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.436 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:52.436 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:53.373 rmmod nvme_tcp 00:34:53.373 rmmod nvme_fabrics 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
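The rejected bdev_nvme_set_keys call logged above is a deliberate negative test: "ckey1" does not match the controller key, so the target answers JSON-RPC error -13 (Permission denied) and the harness asserts the failure path ([[ 1 == 0 ]] -> es=1). A minimal sketch of the raw request the RPC client sends; the UNIX socket path in the comment is SPDK's default and an assumption here, not taken from this log:

```shell
# Same JSON-RPC payload as in the log above (error -13 is expected because
# the controller key is deliberately wrong in this test).
req='{"name": "nvme0", "dhchap_key": "key2", "dhchap_ctrlr_key": "ckey1", "method": "bdev_nvme_set_keys", "req_id": 1}'
printf '%s\n' "$req"
# Delivery would be roughly: printf '%s' "$req" | nc -U /var/tmp/spdk.sock
```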
00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 876243 ']' 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 876243 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 876243 ']' 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 876243 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 876243 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 876243' 00:34:53.373 killing process with pid 876243 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 876243 00:34:53.373 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 876243 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:53.635 08:08:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.635 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.605 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.605 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:55.606 
08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:55.606 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:56.981 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:56.981 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:56.981 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:56.981 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:56.981 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:56.981 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:56.981 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:56.981 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:56.981 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:56.981 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:56.981 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:56.981 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:56.981 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:56.981 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:56.981 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:56.981 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:57.920 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:57.920 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.XjV /tmp/spdk.key-null.MvO /tmp/spdk.key-sha256.V6r /tmp/spdk.key-sha384.0gi /tmp/spdk.key-sha512.zXf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:57.920 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:59.299 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:59.299 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:59.299 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:59.299 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:59.299 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:59.299 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:59.299 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:59.299 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:59.299 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:59.299 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:59.299 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:59.299 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:59.299 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:59.299 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:59.299 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:59.299 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:59.299 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:59.299 00:34:59.299 real 0m51.018s 00:34:59.299 user 0m48.347s 00:34:59.299 sys 0m6.144s 00:34:59.299 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.299 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.299 ************************************ 00:34:59.299 END TEST nvmf_auth_host 00:34:59.299 ************************************ 00:34:59.299 08:08:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:59.299 08:08:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:59.299 08:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:59.299 08:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.299 08:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.299 ************************************ 00:34:59.299 START TEST nvmf_digest 00:34:59.299 ************************************ 00:34:59.299 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:59.558 * Looking for test storage... 00:34:59.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@338 -- # local 'op=<' 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 
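The cmp_versions trace above ("lt 1.15 2") splits both dotted versions on "." into arrays and compares them field by field, numerically. A self-contained sketch of that logic (function name is mine, not the script's):

```shell
# Field-by-field numeric comparison of dotted versions, as traced above:
# missing fields count as 0; equal versions are not "less than".
version_lt() {
    local IFS=. i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if ((x < y)); then return 0; fi
        if ((x > y)); then return 1; fi
    done
    return 1
}
if version_lt 1.15 2; then echo "lcov 1.15 predates 2"; fi
```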
00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:59.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.558 --rc genhtml_branch_coverage=1 00:34:59.558 --rc genhtml_function_coverage=1 00:34:59.558 --rc genhtml_legend=1 00:34:59.558 --rc geninfo_all_blocks=1 00:34:59.558 --rc geninfo_unexecuted_blocks=1 00:34:59.558 00:34:59.558 ' 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:59.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.558 --rc genhtml_branch_coverage=1 00:34:59.558 --rc genhtml_function_coverage=1 00:34:59.558 --rc genhtml_legend=1 00:34:59.558 --rc geninfo_all_blocks=1 00:34:59.558 --rc geninfo_unexecuted_blocks=1 00:34:59.558 00:34:59.558 ' 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:59.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.558 --rc genhtml_branch_coverage=1 00:34:59.558 --rc genhtml_function_coverage=1 00:34:59.558 --rc genhtml_legend=1 00:34:59.558 --rc geninfo_all_blocks=1 00:34:59.558 --rc geninfo_unexecuted_blocks=1 00:34:59.558 00:34:59.558 ' 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:59.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.558 --rc genhtml_branch_coverage=1 00:34:59.558 --rc genhtml_function_coverage=1 00:34:59.558 --rc genhtml_legend=1 00:34:59.558 --rc geninfo_all_blocks=1 00:34:59.558 --rc geninfo_unexecuted_blocks=1 00:34:59.558 00:34:59.558 ' 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.558 
08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
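`nvme gen-hostnqn`, traced above, derives NVME_HOSTNQN and NVME_HOSTID from a random UUID under the standard `nqn.2014-08.org.nvmexpress:uuid:` prefix. An approximate shell equivalent; reading the kernel's random uuid file is a Linux-ism and an assumption here:

```shell
# Roughly what nvme gen-hostnqn produces: a UUID-based host NQN.
uuid=$(cat /proc/sys/kernel/random/uuid)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID=$uuid
printf '%s\n' "$NVME_HOSTNQN"
```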
00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.558 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
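The paths/export.sh trace above prepends the same toolchain directories once per source, so PATH accumulates many duplicate entries. A sketch (my helper, not part of the scripts) that collapses such a PATH while preserving first-occurrence order:

```shell
# Collapse duplicate PATH entries, keeping the first occurrence of each.
dedupe_path() {
    local out='' seen=: dir
    local IFS=:
    for dir in $1; do
        case "$seen" in
            *":$dir:"*) continue ;;
        esac
        seen="$seen$dir:"
        out="${out:+$out:}$dir"
    done
    printf '%s\n' "$out"
}
dedupe_path "/opt/go/1.21.1/bin:/usr/local/bin:/opt/go/1.21.1/bin:/usr/bin"
```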
00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:59.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:59.559 08:08:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:59.559 08:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.096 08:08:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:02.096 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:02.096 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.096 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:02.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:02.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
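The device discovery above resolves each NIC's PCI address to its kernel net device by globbing sysfs (which is how "Found net devices under 0000:0a:00.0: cvl_0_0" is produced). A standalone sketch with an overridable sysfs root so it can be exercised without the real hardware; the function name and second parameter are mine:

```shell
# List net devices backed by a PCI address via /sys/bus/pci/devices/$pci/net/*.
pci_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices} path
    for path in "$root/$pci/net/"*; do
        # Unmatched globs stay literal; skip them.
        if [ -e "$path" ]; then
            printf '%s\n' "${path##*/}"
        fi
    done
}
pci_net_devs 0000:0a:00.0
```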
00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:02.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:35:02.097 00:35:02.097 --- 10.0.0.2 ping statistics --- 00:35:02.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.097 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:02.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:35:02.097 00:35:02.097 --- 10.0.0.1 ping statistics --- 00:35:02.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.097 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:02.097 ************************************ 00:35:02.097 START TEST nvmf_digest_clean 00:35:02.097 ************************************ 00:35:02.097 
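The `nvmf_tcp_init` sequence above moves one port of the NIC pair (`cvl_0_0`) into a private network namespace and addresses both sides out of 10.0.0.0/24 before the digest tests start, so target and initiator talk over a real wire. A dry-run sketch of that topology plan — the interface names, addresses, and iptables rule are taken from the log, but the `maybe_run` helper is hypothetical and only echoes, since the real commands need root and the hardware interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init topology seen in the log above.
# maybe_run is a hypothetical helper: with DRY_RUN=1 it only prints the command.
DRY_RUN=1
maybe_run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

plan=$(
  maybe_run ip netns add cvl_0_0_ns_spdk                 # private namespace for the target
  maybe_run ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port moves into it
  maybe_run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
  maybe_run ip netns exec cvl_0_0_ns_spdk \
            ip addr add 10.0.0.2/24 dev cvl_0_0          # target side, reached over the wire
  maybe_run ip link set cvl_0_1 up
  maybe_run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  maybe_run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
)
printf '%s\n' "$plan"
```

The two `ping -c 1` probes in the log (root ns to 10.0.0.2, namespace to 10.0.0.1) are the sanity check that this plan actually produced a bidirectional link.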
08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=885837 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 885837 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 885837 ']' 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.097 08:08:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.097 08:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.097 [2024-11-18 08:08:54.904034] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:02.097 [2024-11-18 08:08:54.904120] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.097 [2024-11-18 08:08:54.976439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.097 [2024-11-18 08:08:55.018700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.097 [2024-11-18 08:08:55.018760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.097 [2024-11-18 08:08:55.018774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.098 [2024-11-18 08:08:55.018785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.098 [2024-11-18 08:08:55.018795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:02.098 [2024-11-18 08:08:55.019317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.098 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.098 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:02.098 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:02.098 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:02.098 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.357 null0 00:35:02.357 [2024-11-18 08:08:55.307693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.357 [2024-11-18 08:08:55.331944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=885862 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 885862 /var/tmp/bperf.sock 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 885862 ']' 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:02.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
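`bdevperf` is launched here with `-z --wait-for-rpc` and its RPC server on `/var/tmp/bperf.sock`, and `waitforlisten` polls that socket (up to `max_retries=100`) before any RPC is issued. A self-contained sketch of that polling idea — the throwaway socket path and the python3 stand-in for the server are assumptions for the demo, not the real harness:

```shell
#!/usr/bin/env bash
sock=/tmp/bperf_demo.sock
rm -f "$sock"

# Stand-in for bdevperf: starts listening on the UNIX socket after a short delay.
python3 - "$sock" <<'PY' &
import socket, sys, time
time.sleep(0.2)
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])
s.listen(1)
s.accept()  # one connection is enough for the demo
PY
srv=$!

# waitforlisten: retry connecting until the server is up, bounded like max_retries=100.
status=down
for _ in $(seq 1 100); do
  if python3 -c 'import socket, sys
s = socket.socket(socket.AF_UNIX)
s.connect(sys.argv[1])' "$sock" 2>/dev/null; then
    status=up
    break
  fi
  sleep 0.1
done
wait "$srv" 2>/dev/null
echo "rpc socket is $status"
```

A connect to a bound-but-not-yet-listening UNIX socket is refused, so the loop keeps retrying until `listen()` has run, which is exactly why polling the socket is safer than sleeping a fixed interval.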
00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.357 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.357 [2024-11-18 08:08:55.380921] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:02.357 [2024-11-18 08:08:55.380986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885862 ] 00:35:02.616 [2024-11-18 08:08:55.448284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.616 [2024-11-18 08:08:55.494193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.616 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.616 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:02.616 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:02.616 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:02.616 08:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:03.185 08:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.185 08:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.444 nvme0n1 00:35:03.444 08:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:03.444 08:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:03.704 Running I/O for 2 seconds... 00:35:05.580 18589.00 IOPS, 72.61 MiB/s [2024-11-18T07:08:58.668Z] 18586.00 IOPS, 72.60 MiB/s 00:35:05.580 Latency(us) 00:35:05.580 [2024-11-18T07:08:58.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.580 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:05.580 nvme0n1 : 2.01 18588.25 72.61 0.00 0.00 6876.42 3470.98 16602.45 00:35:05.580 [2024-11-18T07:08:58.668Z] =================================================================================================================== 00:35:05.580 [2024-11-18T07:08:58.668Z] Total : 18588.25 72.61 0.00 0.00 6876.42 3470.98 16602.45 00:35:05.580 { 00:35:05.580 "results": [ 00:35:05.580 { 00:35:05.580 "job": "nvme0n1", 00:35:05.580 "core_mask": "0x2", 00:35:05.580 "workload": "randread", 00:35:05.580 "status": "finished", 00:35:05.580 "queue_depth": 128, 00:35:05.580 "io_size": 4096, 00:35:05.580 "runtime": 2.006644, 00:35:05.580 "iops": 18588.249834051283, 00:35:05.580 "mibps": 72.61035091426282, 00:35:05.580 "io_failed": 0, 00:35:05.580 "io_timeout": 0, 00:35:05.580 "avg_latency_us": 6876.417347234634, 00:35:05.580 "min_latency_us": 3470.9807407407407, 00:35:05.580 "max_latency_us": 16602.453333333335 00:35:05.580 } 00:35:05.580 ], 00:35:05.580 "core_count": 1 00:35:05.580 } 00:35:05.580 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:05.580 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:35:05.580 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:05.580 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:05.580 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:05.580 | select(.opcode=="crc32c") 00:35:05.580 | "\(.module_name) \(.executed)"' 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 885862 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 885862 ']' 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 885862 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885862 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885862' 00:35:05.840 killing process with pid 885862 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 885862 00:35:05.840 Received shutdown signal, test time was about 2.000000 seconds 00:35:05.840 00:35:05.840 Latency(us) 00:35:05.840 [2024-11-18T07:08:58.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.840 [2024-11-18T07:08:58.928Z] =================================================================================================================== 00:35:05.840 [2024-11-18T07:08:58.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:05.840 08:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 885862 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=886388 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 886388 /var/tmp/bperf.sock 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 886388 ']' 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:06.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.099 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:06.099 [2024-11-18 08:08:59.145702] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:06.099 [2024-11-18 08:08:59.145810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886388 ] 00:35:06.099 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:06.099 Zero copy mechanism will not be used. 
00:35:06.356 [2024-11-18 08:08:59.214327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.356 [2024-11-18 08:08:59.258507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.356 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.356 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:06.356 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:06.356 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:06.356 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:06.923 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:06.923 08:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.181 nvme0n1 00:35:07.181 08:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:07.181 08:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:07.440 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:07.440 Zero copy mechanism will not be used. 00:35:07.440 Running I/O for 2 seconds... 
00:35:09.308 4743.00 IOPS, 592.88 MiB/s [2024-11-18T07:09:02.396Z] 4769.00 IOPS, 596.12 MiB/s 00:35:09.308 Latency(us) 00:35:09.308 [2024-11-18T07:09:02.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.308 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:09.308 nvme0n1 : 2.00 4769.67 596.21 0.00 0.00 3350.50 885.95 12913.02 00:35:09.308 [2024-11-18T07:09:02.396Z] =================================================================================================================== 00:35:09.308 [2024-11-18T07:09:02.396Z] Total : 4769.67 596.21 0.00 0.00 3350.50 885.95 12913.02 00:35:09.308 { 00:35:09.308 "results": [ 00:35:09.308 { 00:35:09.308 "job": "nvme0n1", 00:35:09.308 "core_mask": "0x2", 00:35:09.308 "workload": "randread", 00:35:09.308 "status": "finished", 00:35:09.308 "queue_depth": 16, 00:35:09.308 "io_size": 131072, 00:35:09.308 "runtime": 2.003075, 00:35:09.308 "iops": 4769.666637544775, 00:35:09.308 "mibps": 596.2083296930969, 00:35:09.308 "io_failed": 0, 00:35:09.308 "io_timeout": 0, 00:35:09.308 "avg_latency_us": 3350.5031659417427, 00:35:09.308 "min_latency_us": 885.9496296296296, 00:35:09.308 "max_latency_us": 12913.01925925926 00:35:09.308 } 00:35:09.308 ], 00:35:09.308 "core_count": 1 00:35:09.308 } 00:35:09.308 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:09.308 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:09.308 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:09.308 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:09.308 | select(.opcode=="crc32c") 00:35:09.308 | "\(.module_name) \(.executed)"' 00:35:09.308 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:09.566 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 886388 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 886388 ']' 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 886388 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886388 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 886388' 00:35:09.567 killing process with pid 886388 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 886388 00:35:09.567 Received shutdown signal, test time was about 2.000000 seconds 00:35:09.567 
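The MiB/s columns in the bdevperf tables follow directly from IOPS × IO size: 18588.25 IOPS at 4096 B is 72.61 MiB/s, and 4769.67 IOPS at 128 KiB is 596.21 MiB/s. A quick check of both conversions using the figures from the result tables above:

```shell
# MiB/s = IOPS * io_size / 2^20, matching the bdevperf tables in the log
mibps() { awk -v iops="$1" -v bs="$2" 'BEGIN { printf "%.2f\n", iops * bs / 1048576 }'; }
mibps 18588.25 4096     # randread, 4 KiB,   qd 128  -> 72.61
mibps 4769.67  131072   # randread, 128 KiB, qd 16   -> 596.21
```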
00:35:09.567 Latency(us) 00:35:09.567 [2024-11-18T07:09:02.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.567 [2024-11-18T07:09:02.655Z] =================================================================================================================== 00:35:09.567 [2024-11-18T07:09:02.655Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:09.567 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 886388 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=886910 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 886910 /var/tmp/bperf.sock 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 886910 ']' 00:35:09.825 08:09:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:09.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.825 08:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:09.825 [2024-11-18 08:09:02.897825] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:09.825 [2024-11-18 08:09:02.897937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886910 ] 00:35:10.082 [2024-11-18 08:09:02.967545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.082 [2024-11-18 08:09:03.015683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.082 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.082 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:10.082 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:10.082 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:10.082 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:10.648 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:10.648 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:10.906 nvme0n1 00:35:10.906 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:10.906 08:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:11.164 Running I/O for 2 seconds... 
00:35:13.033 18653.00 IOPS, 72.86 MiB/s [2024-11-18T07:09:06.121Z] 18602.50 IOPS, 72.67 MiB/s 00:35:13.033 Latency(us) 00:35:13.033 [2024-11-18T07:09:06.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.033 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:13.033 nvme0n1 : 2.01 18605.19 72.68 0.00 0.00 6865.43 5364.24 13301.38 00:35:13.033 [2024-11-18T07:09:06.121Z] =================================================================================================================== 00:35:13.033 [2024-11-18T07:09:06.121Z] Total : 18605.19 72.68 0.00 0.00 6865.43 5364.24 13301.38 00:35:13.033 { 00:35:13.033 "results": [ 00:35:13.033 { 00:35:13.033 "job": "nvme0n1", 00:35:13.033 "core_mask": "0x2", 00:35:13.033 "workload": "randwrite", 00:35:13.033 "status": "finished", 00:35:13.033 "queue_depth": 128, 00:35:13.033 "io_size": 4096, 00:35:13.033 "runtime": 2.006591, 00:35:13.033 "iops": 18605.18660753487, 00:35:13.033 "mibps": 72.67651018568309, 00:35:13.033 "io_failed": 0, 00:35:13.033 "io_timeout": 0, 00:35:13.033 "avg_latency_us": 6865.428013385041, 00:35:13.033 "min_latency_us": 5364.242962962963, 00:35:13.033 "max_latency_us": 13301.38074074074 00:35:13.033 } 00:35:13.033 ], 00:35:13.033 "core_count": 1 00:35:13.033 } 00:35:13.033 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:13.033 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:13.033 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:13.033 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:13.033 | select(.opcode=="crc32c") 00:35:13.033 | "\(.module_name) \(.executed)"' 00:35:13.033 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:13.291 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:13.291 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:13.291 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:13.291 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:13.291 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 886910 00:35:13.291 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 886910 ']' 00:35:13.291 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 886910 00:35:13.291 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:13.292 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:13.292 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886910 00:35:13.292 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:13.292 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:13.292 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 886910' 00:35:13.292 killing process with pid 886910 00:35:13.292 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 886910 00:35:13.292 Received shutdown signal, test time was about 2.000000 seconds 00:35:13.292 
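The `perform_tests` call above returns its summary as JSON (the `"results"` block printed in the log). A minimal sketch of pulling throughput back out of that payload — the sample here is trimmed to the fields actually used, with values copied from the run above — shows that the reported `mibps` is just IOPS scaled by the per-I/O size:

```python
import json

# Results payload in the shape bdevperf's perform_tests prints above,
# trimmed to the fields this sketch reads (values taken from the log).
results = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "workload": "randwrite",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.006591,
      "iops": 18605.18660753487,
      "mibps": 72.67651018568309,
      "io_failed": 0
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# MiB/s is derived from IOPS and the per-I/O size in bytes.
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(f"{job['job']}: {job['iops']:.2f} IOPS, {mibps:.2f} MiB/s")
```

For the 4096-byte run this reproduces the logged 72.68 MiB/s figure to within floating-point noise.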
00:35:13.292 Latency(us) 00:35:13.292 [2024-11-18T07:09:06.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.292 [2024-11-18T07:09:06.380Z] =================================================================================================================== 00:35:13.292 [2024-11-18T07:09:06.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:13.292 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 886910 00:35:13.550 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=887320 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 887320 /var/tmp/bperf.sock 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 887320 ']' 00:35:13.551 08:09:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:13.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.551 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:13.551 [2024-11-18 08:09:06.605696] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:13.551 [2024-11-18 08:09:06.605790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887320 ] 00:35:13.551 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:13.551 Zero copy mechanism will not be used. 
00:35:13.810 [2024-11-18 08:09:06.681208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.810 [2024-11-18 08:09:06.730960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.810 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:13.810 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:13.810 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:13.810 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:13.810 08:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:14.378 08:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:14.378 08:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:14.636 nvme0n1 00:35:14.636 08:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:14.636 08:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:14.636 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:14.636 Zero copy mechanism will not be used. 00:35:14.636 Running I/O for 2 seconds... 
00:35:16.943 5439.00 IOPS, 679.88 MiB/s [2024-11-18T07:09:10.031Z] 5438.00 IOPS, 679.75 MiB/s 00:35:16.943 Latency(us) 00:35:16.943 [2024-11-18T07:09:10.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.943 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:16.943 nvme0n1 : 2.00 5436.93 679.62 0.00 0.00 2936.13 2014.63 11359.57 00:35:16.943 [2024-11-18T07:09:10.031Z] =================================================================================================================== 00:35:16.943 [2024-11-18T07:09:10.031Z] Total : 5436.93 679.62 0.00 0.00 2936.13 2014.63 11359.57 00:35:16.943 { 00:35:16.943 "results": [ 00:35:16.943 { 00:35:16.943 "job": "nvme0n1", 00:35:16.943 "core_mask": "0x2", 00:35:16.943 "workload": "randwrite", 00:35:16.943 "status": "finished", 00:35:16.943 "queue_depth": 16, 00:35:16.943 "io_size": 131072, 00:35:16.943 "runtime": 2.004255, 00:35:16.943 "iops": 5436.932925201633, 00:35:16.943 "mibps": 679.6166156502042, 00:35:16.943 "io_failed": 0, 00:35:16.943 "io_timeout": 0, 00:35:16.943 "avg_latency_us": 2936.1333722159343, 00:35:16.943 "min_latency_us": 2014.6251851851853, 00:35:16.943 "max_latency_us": 11359.573333333334 00:35:16.943 } 00:35:16.943 ], 00:35:16.943 "core_count": 1 00:35:16.943 } 00:35:16.943 08:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:16.943 08:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:16.943 08:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:16.943 08:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:16.943 | select(.opcode=="crc32c") 00:35:16.943 | "\(.module_name) \(.executed)"' 00:35:16.943 08:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 887320 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 887320 ']' 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 887320 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.943 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 887320 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 887320' 00:35:17.202 killing process with pid 887320 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 887320 00:35:17.202 Received shutdown signal, test time was about 2.000000 seconds 00:35:17.202 
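The `get_accel_stats` step above pipes `accel_get_stats` through the jq filter `.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"` so the test can read back which accel module executed the crc32c operations. The same selection in Python, against a hypothetical stats payload (the real RPC output carries more fields than shown here):

```python
import json

# Hypothetical accel_get_stats payload; the real RPC returns additional
# fields, but only opcode / module_name / executed matter to the filter.
stats = json.loads("""
{
  "operations": [
    {"opcode": "copy",   "module_name": "software", "executed": 10},
    {"opcode": "crc32c", "module_name": "software", "executed": 37210}
  ]
}
""")

# Equivalent of the jq filter used in the log:
#   .operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"
for op in stats["operations"]:
    if op["opcode"] == "crc32c":
        print(op["module_name"], op["executed"])
```

The test then compares the first field against its expected module (`software` here, since dsa scanning is disabled) and checks that the executed count is non-zero.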
00:35:17.202 Latency(us) 00:35:17.202 [2024-11-18T07:09:10.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.202 [2024-11-18T07:09:10.290Z] =================================================================================================================== 00:35:17.202 [2024-11-18T07:09:10.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 887320 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 885837 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 885837 ']' 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 885837 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.202 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885837 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885837' 00:35:17.461 killing process with pid 885837 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 885837 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 885837 00:35:17.461 00:35:17.461 real 0m15.645s 
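As a side note, the average latencies reported by the two runs above are consistent with Little's law: at a fixed queue depth, average latency ≈ queue_depth / IOPS. A rough sanity check against the logged figures (the small residual comes from ramp-up and timing jitter):

```python
# (queue_depth, iops, reported average latency in microseconds) for the
# two randwrite runs logged above.
runs = [
    (128, 18605.19, 6865.43),   # 4 KiB randwrite, qd 128
    (16,  5436.93,  2936.13),   # 128 KiB randwrite, qd 16
]

for qd, iops, reported_us in runs:
    # Little's law: L = lambda * W  =>  W = L / lambda
    predicted_us = qd / iops * 1e6
    print(f"qd={qd}: predicted {predicted_us:.0f} us, reported {reported_us:.0f} us")
```

Both predictions land within a fraction of a percent of the reported averages, which is a quick way to confirm the latency and IOPS columns in these summaries agree with each other.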
00:35:17.461 user 0m31.168s 00:35:17.461 sys 0m4.288s 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:17.461 ************************************ 00:35:17.461 END TEST nvmf_digest_clean 00:35:17.461 ************************************ 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:17.461 ************************************ 00:35:17.461 START TEST nvmf_digest_error 00:35:17.461 ************************************ 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:17.461 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.719 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=888374 00:35:17.719 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:17.719 08:09:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 888374 00:35:17.719 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 888374 ']' 00:35:17.719 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.719 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.719 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.719 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.719 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.719 [2024-11-18 08:09:10.600576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:17.719 [2024-11-18 08:09:10.600644] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.719 [2024-11-18 08:09:10.670599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.719 [2024-11-18 08:09:10.715060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:17.719 [2024-11-18 08:09:10.715111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:17.719 [2024-11-18 08:09:10.715124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.719 [2024-11-18 08:09:10.715135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:17.719 [2024-11-18 08:09:10.715145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:17.719 [2024-11-18 08:09:10.715706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.978 [2024-11-18 08:09:10.880536] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.978 08:09:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.978 08:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.978 null0 00:35:17.978 [2024-11-18 08:09:10.999403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.979 [2024-11-18 08:09:11.023696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=888404 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 888404 /var/tmp/bperf.sock 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 888404 ']' 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:17.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.979 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.237 [2024-11-18 08:09:11.075540] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:18.237 [2024-11-18 08:09:11.075629] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888404 ] 00:35:18.237 [2024-11-18 08:09:11.144103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.237 [2024-11-18 08:09:11.192601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.237 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.237 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:18.237 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.237 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.495 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:18.495 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.495 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.495 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.495 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:18.495 08:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.061 nvme0n1 00:35:19.061 08:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:19.061 08:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.061 08:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.061 08:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.061 08:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:19.061 08:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.061 Running I/O for 2 seconds... 00:35:19.319 [2024-11-18 08:09:12.160823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:19.319 [2024-11-18 08:09:12.160895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.319 [2024-11-18 08:09:12.160930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.319 [2024-11-18 08:09:12.177373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:19.319 [2024-11-18 08:09:12.177407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.319 [2024-11-18 08:09:12.177424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.319 [2024-11-18 08:09:12.193409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:19.319 [2024-11-18 08:09:12.193439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.319 [2024-11-18 08:09:12.193455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.319 [2024-11-18 08:09:12.204622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:19.319 [2024-11-18 08:09:12.204653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13154 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.319 [2024-11-18 08:09:12.204670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.319 [2024-11-18 08:09:12.218641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:19.319 [2024-11-18 08:09:12.218674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.319 [2024-11-18 08:09:12.218692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.319 [2024-11-18 08:09:12.234893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:19.319 [2024-11-18 08:09:12.234924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.319 [2024-11-18 08:09:12.234941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.319 [2024-11-18 08:09:12.246367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:19.319 [2024-11-18 08:09:12.246397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.319 [2024-11-18 08:09:12.246414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.320 [2024-11-18 08:09:12.259612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:19.320 [2024-11-18 08:09:12.259658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.259676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.274531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.274578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.274595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.286952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.286982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.286999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.301671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.301720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.301776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.316634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.316667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.316688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.328660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.328693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.328710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.341638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.341667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.341683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.355536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.355569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.355586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.367913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.367946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.367963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.379091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.379121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.379137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.320 [2024-11-18 08:09:12.394086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.320 [2024-11-18 08:09:12.394117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.320 [2024-11-18 08:09:12.394134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.408738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.408793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.408823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.421382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.421413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.421429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.437785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.437845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.437863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.453594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.453625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.453642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.468826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.468858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.468876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.480267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.480298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.480314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.495892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.495963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.495980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.507639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.507669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.507687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.522124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.522154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.522169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.537291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.537320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.578 [2024-11-18 08:09:12.537336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.578 [2024-11-18 08:09:12.551661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.578 [2024-11-18 08:09:12.551691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.551708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.579 [2024-11-18 08:09:12.563236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.579 [2024-11-18 08:09:12.563269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.563286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.579 [2024-11-18 08:09:12.577223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.579 [2024-11-18 08:09:12.577254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.577284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.579 [2024-11-18 08:09:12.591479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.579 [2024-11-18 08:09:12.591540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.591558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.579 [2024-11-18 08:09:12.601989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.579 [2024-11-18 08:09:12.602018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.602034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.579 [2024-11-18 08:09:12.616736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.579 [2024-11-18 08:09:12.616767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.616784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.579 [2024-11-18 08:09:12.628234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.579 [2024-11-18 08:09:12.628263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.628279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.579 [2024-11-18 08:09:12.641712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.579 [2024-11-18 08:09:12.641743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.641760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.579 [2024-11-18 08:09:12.654630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.579 [2024-11-18 08:09:12.654660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.579 [2024-11-18 08:09:12.654677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.668729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.668765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.668799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.685562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.685593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.685610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.698681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.698711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.698727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.711709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.711754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.711772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.725078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.725109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.725127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.736644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.736675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.736692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.750003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.750032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.750048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.762919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.762948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.762963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.775432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.775461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.775498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.789978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.790008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.790024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.806331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.806391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.806412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.823419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.823457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.823495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.838859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.837 [2024-11-18 08:09:12.838890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.837 [2024-11-18 08:09:12.838907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.837 [2024-11-18 08:09:12.854942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.838 [2024-11-18 08:09:12.854972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.838 [2024-11-18 08:09:12.854988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.838 [2024-11-18 08:09:12.865683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.838 [2024-11-18 08:09:12.865715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.838 [2024-11-18 08:09:12.865747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.838 [2024-11-18 08:09:12.880616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.838 [2024-11-18 08:09:12.880648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.838 [2024-11-18 08:09:12.880666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.838 [2024-11-18 08:09:12.895620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.838 [2024-11-18 08:09:12.895651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.838 [2024-11-18 08:09:12.895668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.838 [2024-11-18 08:09:12.911888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:19.838 [2024-11-18 08:09:12.911918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.838 [2024-11-18 08:09:12.911940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.096 [2024-11-18 08:09:12.927459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.096 [2024-11-18 08:09:12.927529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.096 [2024-11-18 08:09:12.927551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.096 [2024-11-18 08:09:12.940251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.096 [2024-11-18 08:09:12.940297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.096 [2024-11-18 08:09:12.940315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.096 [2024-11-18 08:09:12.954388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.096 [2024-11-18 08:09:12.954419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.096 [2024-11-18 08:09:12.954436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.096 [2024-11-18 08:09:12.970721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.096 [2024-11-18 08:09:12.970755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:12.970774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:12.982352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:12.982382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:12.982398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:12.997647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:12.997678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:12.997694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.014148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.014182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.014213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.029510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.029546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.029564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.041669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.041708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.041729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.055956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.055986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.056003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.068196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.068227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.068244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.079567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.079597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.079614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.093781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.093835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.093861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.105670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.105701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.105718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.119059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.119092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.119109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.130625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.130655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.130671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 18269.00 IOPS, 71.36 MiB/s [2024-11-18T07:09:13.185Z]
00:35:20.097 [2024-11-18 08:09:13.144197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.144227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.144248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.159405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.159436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.159453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.097 [2024-11-18 08:09:13.174547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.097 [2024-11-18 08:09:13.174589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.097 [2024-11-18 08:09:13.174609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.355 [2024-11-18 08:09:13.186459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.355 [2024-11-18 08:09:13.186511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.355 [2024-11-18 08:09:13.186531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.355 [2024-11-18 08:09:13.202392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.202423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.202439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.217735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.217769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.217802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.230065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.230094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.230110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.245006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.245052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.245069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.257224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.257253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.257270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.271060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.271110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.271127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.284335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.284380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.284411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.297270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.297301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.297332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.309589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.309619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.356 [2024-11-18 08:09:13.309636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:20.356 [2024-11-18 08:09:13.324503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40)
00:35:20.356 [2024-11-18 08:09:13.324533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:35:20.356 [2024-11-18 08:09:13.324549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.356 [2024-11-18 08:09:13.336168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.356 [2024-11-18 08:09:13.336199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.356 [2024-11-18 08:09:13.336216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.356 [2024-11-18 08:09:13.350857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.356 [2024-11-18 08:09:13.350888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.356 [2024-11-18 08:09:13.350904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.356 [2024-11-18 08:09:13.366119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.356 [2024-11-18 08:09:13.366150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.356 [2024-11-18 08:09:13.366167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.356 [2024-11-18 08:09:13.380567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.356 [2024-11-18 08:09:13.380601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:32 nsid:1 lba:14170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.356 [2024-11-18 08:09:13.380619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.356 [2024-11-18 08:09:13.392120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.356 [2024-11-18 08:09:13.392150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.356 [2024-11-18 08:09:13.392166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.356 [2024-11-18 08:09:13.407549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.356 [2024-11-18 08:09:13.407578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.356 [2024-11-18 08:09:13.407595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.356 [2024-11-18 08:09:13.421304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.356 [2024-11-18 08:09:13.421333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.356 [2024-11-18 08:09:13.421349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.356 [2024-11-18 08:09:13.437571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.356 [2024-11-18 08:09:13.437601] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.356 [2024-11-18 08:09:13.437617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.454338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.454370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.454404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.468876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.468905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.468922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.482131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.482161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.482177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.494786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.494831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.494848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.505725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.505756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.505778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.520422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.520451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.520467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.536205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.536249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.536266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.549811] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.549858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.549874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.561730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.561760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.561776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.576781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.576831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.576855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.590876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.590977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.590999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.603087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.603116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.603132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.620377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.620407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.620423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.636247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.636282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.636299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.653088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.653117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.653133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.668170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.668248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.668266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.680573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.680606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.680624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.615 [2024-11-18 08:09:13.695587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.615 [2024-11-18 08:09:13.695617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.615 [2024-11-18 08:09:13.695634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.873 [2024-11-18 08:09:13.712392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.873 [2024-11-18 08:09:13.712421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.873 [2024-11-18 
08:09:13.712437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.873 [2024-11-18 08:09:13.722892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.873 [2024-11-18 08:09:13.722921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.873 [2024-11-18 08:09:13.722937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.873 [2024-11-18 08:09:13.738562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.873 [2024-11-18 08:09:13.738595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.873 [2024-11-18 08:09:13.738613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.873 [2024-11-18 08:09:13.752299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.873 [2024-11-18 08:09:13.752328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.873 [2024-11-18 08:09:13.752349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.873 [2024-11-18 08:09:13.765119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.873 [2024-11-18 08:09:13.765149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9264 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.873 [2024-11-18 08:09:13.765165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.873 [2024-11-18 08:09:13.777539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.873 [2024-11-18 08:09:13.777623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.873 [2024-11-18 08:09:13.777644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.873 [2024-11-18 08:09:13.790736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.873 [2024-11-18 08:09:13.790850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.873 [2024-11-18 08:09:13.790872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.873 [2024-11-18 08:09:13.803516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.873 [2024-11-18 08:09:13.803547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.803565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.817937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.817967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.817984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.832519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.832585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.832610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.845349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.845379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.845395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.859699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.859745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.859762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.875511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.875546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.875564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.887000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.887029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.887045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.900382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.900412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.900428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.916871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.916903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.916920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.931346] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.931391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.931409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.942896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.942926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.942943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.874 [2024-11-18 08:09:13.957363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:20.874 [2024-11-18 08:09:13.957393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.874 [2024-11-18 08:09:13.957409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.132 [2024-11-18 08:09:13.973401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.132 [2024-11-18 08:09:13.973446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.132 [2024-11-18 08:09:13.973512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:21.132 [2024-11-18 08:09:13.984881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.132 [2024-11-18 08:09:13.984910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.132 [2024-11-18 08:09:13.984926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.132 [2024-11-18 08:09:13.998848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.132 [2024-11-18 08:09:13.998894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.132 [2024-11-18 08:09:13.998910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.132 [2024-11-18 08:09:14.013110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.132 [2024-11-18 08:09:14.013139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.132 [2024-11-18 08:09:14.013154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.132 [2024-11-18 08:09:14.024755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.132 [2024-11-18 08:09:14.024799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.132 [2024-11-18 08:09:14.024816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.132 [2024-11-18 08:09:14.038253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.038283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 08:09:14.038300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 [2024-11-18 08:09:14.051125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.051188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 08:09:14.051206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 [2024-11-18 08:09:14.067690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.067721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 08:09:14.067738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 [2024-11-18 08:09:14.080027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.080056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 
08:09:14.080072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 [2024-11-18 08:09:14.093798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.093828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 08:09:14.093845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 [2024-11-18 08:09:14.106216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.106246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 08:09:14.106267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 [2024-11-18 08:09:14.119931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.119961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 08:09:14.119978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 [2024-11-18 08:09:14.134399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.134441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24171 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 08:09:14.134458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 18298.00 IOPS, 71.48 MiB/s [2024-11-18T07:09:14.221Z] [2024-11-18 08:09:14.146273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59e40) 00:35:21.133 [2024-11-18 08:09:14.146317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.133 [2024-11-18 08:09:14.146333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.133 00:35:21.133 Latency(us) 00:35:21.133 [2024-11-18T07:09:14.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.133 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:21.133 nvme0n1 : 2.01 18309.13 71.52 0.00 0.00 6981.29 3543.80 21554.06 00:35:21.133 [2024-11-18T07:09:14.221Z] =================================================================================================================== 00:35:21.133 [2024-11-18T07:09:14.221Z] Total : 18309.13 71.52 0.00 0.00 6981.29 3543.80 21554.06 00:35:21.133 { 00:35:21.133 "results": [ 00:35:21.133 { 00:35:21.133 "job": "nvme0n1", 00:35:21.133 "core_mask": "0x2", 00:35:21.133 "workload": "randread", 00:35:21.133 "status": "finished", 00:35:21.133 "queue_depth": 128, 00:35:21.133 "io_size": 4096, 00:35:21.133 "runtime": 2.005775, 00:35:21.133 "iops": 18309.132380252024, 00:35:21.133 "mibps": 71.52004836035947, 00:35:21.133 "io_failed": 0, 00:35:21.133 "io_timeout": 0, 00:35:21.133 "avg_latency_us": 6981.290761677699, 00:35:21.133 "min_latency_us": 3543.7985185185184, 00:35:21.133 "max_latency_us": 21554.062222222223 00:35:21.133 } 00:35:21.133 ], 00:35:21.133 "core_count": 1 
00:35:21.133 } 00:35:21.133 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:21.133 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:21.133 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:21.133 | .driver_specific 00:35:21.133 | .nvme_error 00:35:21.133 | .status_code 00:35:21.133 | .command_transient_transport_error' 00:35:21.133 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 888404 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 888404 ']' 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 888404 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 888404 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 888404' 00:35:21.391 killing process with pid 888404 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 888404 00:35:21.391 Received shutdown signal, test time was about 2.000000 seconds 00:35:21.391 00:35:21.391 Latency(us) 00:35:21.391 [2024-11-18T07:09:14.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.391 [2024-11-18T07:09:14.479Z] =================================================================================================================== 00:35:21.391 [2024-11-18T07:09:14.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:21.391 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 888404 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=888811 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 888811 /var/tmp/bperf.sock 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 888811 ']' 00:35:21.650 08:09:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:21.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.650 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.650 [2024-11-18 08:09:14.721883] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:21.650 [2024-11-18 08:09:14.721990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888811 ] 00:35:21.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:21.650 Zero copy mechanism will not be used. 
00:35:21.908 [2024-11-18 08:09:14.792238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.908 [2024-11-18 08:09:14.837688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:21.908 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:21.908 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:21.908 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:21.908 08:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:22.166 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:22.166 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.166 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.166 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.166 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:22.166 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:22.733 nvme0n1 00:35:22.733 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:22.733 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.733 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.733 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.733 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:22.733 08:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:22.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:22.733 Zero copy mechanism will not be used. 00:35:22.733 Running I/O for 2 seconds... 00:35:22.733 [2024-11-18 08:09:15.766384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.733 [2024-11-18 08:09:15.766462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.733 [2024-11-18 08:09:15.766506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.733 [2024-11-18 08:09:15.772204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.733 [2024-11-18 08:09:15.772241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.733 [2024-11-18 08:09:15.772259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.733 
[2024-11-18 08:09:15.779520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.733 [2024-11-18 08:09:15.779553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.733 [2024-11-18 08:09:15.779573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.733 [2024-11-18 08:09:15.786315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.734 [2024-11-18 08:09:15.786348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.734 [2024-11-18 08:09:15.786366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.734 [2024-11-18 08:09:15.791882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.734 [2024-11-18 08:09:15.791930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.734 [2024-11-18 08:09:15.791948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.734 [2024-11-18 08:09:15.797544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.734 [2024-11-18 08:09:15.797576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.734 [2024-11-18 08:09:15.797594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.734 [2024-11-18 08:09:15.803229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.734 [2024-11-18 08:09:15.803276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.734 [2024-11-18 08:09:15.803293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.734 [2024-11-18 08:09:15.808085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.734 [2024-11-18 08:09:15.808132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.734 [2024-11-18 08:09:15.808150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.734 [2024-11-18 08:09:15.813010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.734 [2024-11-18 08:09:15.813043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.734 [2024-11-18 08:09:15.813061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.734 [2024-11-18 08:09:15.818296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.734 [2024-11-18 08:09:15.818329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.734 [2024-11-18 08:09:15.818347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.823771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.823804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.823823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.829457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.829499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.829519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.833084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.833115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.833140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.839212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.839244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.839262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.845419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.845452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.845470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.852234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.852276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.852294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.858631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.858664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.858682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.864809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.864843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.864861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.995 [2024-11-18 08:09:15.870680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.995 [2024-11-18 08:09:15.870713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.995 [2024-11-18 08:09:15.870731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.876857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.876890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.876908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.883617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.883651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.883669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.890190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.890244] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.890262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.894971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.895020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.895037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.899208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.899240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.899258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.904515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.904547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.904565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.910611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.910643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.910661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.918421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.918453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.918486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.923014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.923047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.923064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.929059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.929090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.929107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.935077] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.935118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.935135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.939605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.939637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.939655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.944086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.944118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.944136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.948675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.948706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.948723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001d 
p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.953760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.953793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.953810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.958893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.958924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.958941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.963591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.963632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.963649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.967943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.967974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.967991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.973028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.973075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.973092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.978578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.978613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.978638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.983993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.984028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.984046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.996 [2024-11-18 08:09:15.990304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:22.996 [2024-11-18 08:09:15.990337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.996 [2024-11-18 08:09:15.990355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:22.996 [2024-11-18 08:09:15.996270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.996 [2024-11-18 08:09:15.996304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.996 [2024-11-18 08:09:15.996321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:22.996 [2024-11-18 08:09:16.001602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.996 [2024-11-18 08:09:16.001649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.996 [2024-11-18 08:09:16.001667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:22.996 [2024-11-18 08:09:16.006614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.996 [2024-11-18 08:09:16.006646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.996 [2024-11-18 08:09:16.006664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:22.996 [2024-11-18 08:09:16.011232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.996 [2024-11-18 08:09:16.011263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.996 [2024-11-18 08:09:16.011280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:22.996 [2024-11-18 08:09:16.015774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.996 [2024-11-18 08:09:16.015805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.996 [2024-11-18 08:09:16.015822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.020417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.020448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.020466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.025057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.025089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.025107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.029802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.029833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.029850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.035265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.035297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.035314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.042183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.042215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.042233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.049360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.049392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.049409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.055446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.055504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.055525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.061356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.061388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.061406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.067309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.067343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.067360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.072646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.072679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.072703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.076095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.076127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.076144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:22.997 [2024-11-18 08:09:16.082482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:22.997 [2024-11-18 08:09:16.082522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.997 [2024-11-18 08:09:16.082540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.088114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.088146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.088164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.093565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.093598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.093616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.099745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.099778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.099796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.105700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.105733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.105751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.111395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.111425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.111442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.117219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.117250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.117267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.123085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.123123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.123155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.128727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.128776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.128793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.135765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.135818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.135836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.141682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.141715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.141733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.147313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.147360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.147378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.152347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.152378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.152397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.157112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.157145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.157163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.162047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.162093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.259 [2024-11-18 08:09:16.162112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.259 [2024-11-18 08:09:16.167266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.259 [2024-11-18 08:09:16.167298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.167328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.171522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.171554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.171571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.176052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.176083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.176100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.180593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.180624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.180642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.185586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.185618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.185636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.190980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.191012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.191029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.196338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.196370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.196388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.201341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.201373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.201390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.207223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.207255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.207272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.211376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.211407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.211432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.217462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.217518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.217536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.224978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.225010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.225044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.231486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.231546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.231564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.239178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.239210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.239228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.246981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.247012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.247029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.254694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.254760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.254778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.262589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.262638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.262656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.268461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.268503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.268524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.274133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.274172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.274191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.280322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.280355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.280373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.286730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.286762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.286781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.292389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.292421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.292439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.297903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.297935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.297953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.303826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.303862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.303881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.310298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.310345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.310362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.317314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.317346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.317364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.324678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.260 [2024-11-18 08:09:16.324711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.260 [2024-11-18 08:09:16.324728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.260 [2024-11-18 08:09:16.331623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.261 [2024-11-18 08:09:16.331656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.261 [2024-11-18 08:09:16.331674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.261 [2024-11-18 08:09:16.336427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.261 [2024-11-18 08:09:16.336460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.261 [2024-11-18 08:09:16.336478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.261 [2024-11-18 08:09:16.342724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.261 [2024-11-18 08:09:16.342756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.261 [2024-11-18 08:09:16.342774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.522 [2024-11-18 08:09:16.349932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.522 [2024-11-18 08:09:16.349965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.522 [2024-11-18 08:09:16.350002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.522 [2024-11-18 08:09:16.356331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.522 [2024-11-18 08:09:16.356364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.522 [2024-11-18 08:09:16.356382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.522 [2024-11-18 08:09:16.363017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.522 [2024-11-18 08:09:16.363050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.522 [2024-11-18 08:09:16.363068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.522 [2024-11-18 08:09:16.368587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.522 [2024-11-18 08:09:16.368619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.522 [2024-11-18 08:09:16.368637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.522 [2024-11-18 08:09:16.373916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.522 [2024-11-18 08:09:16.373964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.522 [2024-11-18 08:09:16.373981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.522 [2024-11-18 08:09:16.378726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.522 [2024-11-18 08:09:16.378759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.522 [2024-11-18 08:09:16.378783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.522 [2024-11-18 08:09:16.384372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.522 [2024-11-18 08:09:16.384404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.384421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.389745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.389776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.389809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.394975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.395022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.395040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.400865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.400912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.400931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.407408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.407441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.407459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.411586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.411617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.411650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.417223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.417255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.417288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.424707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.424740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.424759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.430386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.430437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.430455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.436874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.436922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.436940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.442995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.443027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.443045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.448888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.448935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.448953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.454332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.454364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.454382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.460569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0)
00:35:23.523 [2024-11-18 08:09:16.460602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.523 [2024-11-18 08:09:16.460620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:23.523 [2024-11-18 08:09:16.466565]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.466598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.466616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.469972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.470002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.470019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.476179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.476225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.476249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.482300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.482330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.482346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:003d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.487786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.487817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.487849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.493090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.493122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.493141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.499314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.499347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.499365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.507035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.507081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.507099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.513139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.513172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.513204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.521280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.521310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.521326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.529023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.529054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.529072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.536170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.536209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.536228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.541774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.541806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.523 [2024-11-18 08:09:16.541824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.523 [2024-11-18 08:09:16.546507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.523 [2024-11-18 08:09:16.546539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.546557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.549654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.549701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.549719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.555392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.555423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.555441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.560157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.560187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.560204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.565372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.565418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.565436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.570561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.570592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.570609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.576717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.576750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.576768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.583634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.583666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.583684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.590896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.590943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.590962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.599274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 08:09:16.599307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.599339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.524 [2024-11-18 08:09:16.605818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.524 [2024-11-18 
08:09:16.605850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.524 [2024-11-18 08:09:16.605883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.611540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.611572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.611590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.617101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.617146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.617163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.622937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.622969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.622986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.628785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.628817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.628834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.634427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.634471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.634514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.640237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.640270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.640304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.645631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.645663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.645681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.649484] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.649537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.649555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.656340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.656384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.656401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.662181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.662232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.662249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.667986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.668019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.668037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001d 
p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.672446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.672477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.672503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.677218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.677250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.677284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.681698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.681751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.681768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.686584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.686615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.686633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.692434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.692480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.692505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.700029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.700062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.700094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.706587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.706619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.706637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.713478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.713531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.713551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.719577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.719609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.719627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.724575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.724606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.724624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.729178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.729210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.729246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.733763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.733794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.733812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.786 [2024-11-18 08:09:16.738378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.786 [2024-11-18 08:09:16.738409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.786 [2024-11-18 08:09:16.738426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.743974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.744006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.744024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.749181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.749214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.749232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.755142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.755176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.755195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.787 5366.00 IOPS, 670.75 MiB/s [2024-11-18T07:09:16.875Z] [2024-11-18 08:09:16.759444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.759476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.759501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.764960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.764994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.765027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.772293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.772325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.772344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.779943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.780001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.780034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.787572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.787621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.787640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.795073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.795105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.795122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.802665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.802711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.802729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.810237] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.810285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.810302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.818090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.818120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.818137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.825376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.825409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.825427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.833188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.833235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.833253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.840751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.840799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.840817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.848377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.848410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.848428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.855827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.855860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.855879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.860010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.860042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.860060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:23.787 [2024-11-18 08:09:16.867470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:23.787 [2024-11-18 08:09:16.867526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.787 [2024-11-18 08:09:16.867545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.048 [2024-11-18 08:09:16.875283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.048 [2024-11-18 08:09:16.875332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.048 [2024-11-18 08:09:16.875350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.048 [2024-11-18 08:09:16.881283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.048 [2024-11-18 08:09:16.881315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.048 [2024-11-18 08:09:16.881348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.048 [2024-11-18 08:09:16.888282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.048 [2024-11-18 08:09:16.888329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.888347] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.893643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.893676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.893694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.898889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.898921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.898945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.903846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.903878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.903911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.909049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.909081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.909099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.915549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.915581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.915599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.921289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.921322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.921341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.927124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.927157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.927175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.933082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.933113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.933145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.938559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.938592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.938609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.943912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.943944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.943962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.947533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.947570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.947588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.952261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 
08:09:16.952294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.952328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.957263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.957310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.957329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.962559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.962591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.962609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.968379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.968411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.968442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.974384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.974417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.974435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.980120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.980153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.980171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.986167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.986201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.986219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.991240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.991288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.991306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:16.996915] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:16.996947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:16.996965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:17.003378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:17.003411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:17.003429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:17.008759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:17.008791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:17.008809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:17.013587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:17.013620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:17.013638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:005d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:17.019487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:17.019551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:17.019569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:17.025017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.049 [2024-11-18 08:09:17.025049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.049 [2024-11-18 08:09:17.025067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.049 [2024-11-18 08:09:17.030298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.030329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.030361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.035372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.035404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.035422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.039942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.039988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.040006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.045468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.045509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.045529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.050205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.050237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.050255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.054972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.055004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 
08:09:17.055022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.059408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.059438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.059455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.062612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.062642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.062659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.066707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.066739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.066757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.070667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.070699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.070717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.075975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.076019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.076036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.081482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.081521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.081538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.087284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.087316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.087335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.092549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.092582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.092599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.097791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.097839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.097856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.102566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.102598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.102615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.107202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.107233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.107265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.111759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.111790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.111808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.116263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.116295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.116312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.120686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.120716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.120739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.125527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.125564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.125582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.050 [2024-11-18 08:09:17.131338] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.050 [2024-11-18 08:09:17.131369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.050 [2024-11-18 08:09:17.131387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.138937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.138969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.138987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.145111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.145144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.145163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.150993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.151025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.151043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d 
p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.156722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.156754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.156772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.162573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.162606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.162624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.170016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.170048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.170066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.177704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.177742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.177760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.185365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.185397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.185414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.192956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.192989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.193008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.199221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.199252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.199270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.204471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.204510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.204530] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.208976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.209007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.209024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.213546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.213578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.213595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.218084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.218115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.218133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.222527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.222558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.222575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.227049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.227080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.227098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.231527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.231558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.231576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.236129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.236160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.236177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.240851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.240882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.240899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.245556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.245587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.245604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.250195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.250226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.250257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.254721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.254752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.254769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.259260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 
08:09:17.259290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.311 [2024-11-18 08:09:17.259307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.311 [2024-11-18 08:09:17.264346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.311 [2024-11-18 08:09:17.264378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.264401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.269150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.269181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.269198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.271992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.272022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.272039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.276351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.276383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.276401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.281563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.281594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.281612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.286897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.286929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.286947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.293908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.293941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.293960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.301200] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.301233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.301251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.306766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.306799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.306817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.312308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.312345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.312363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.316893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.316924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.316956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:001d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.321824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.321871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.321888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.326734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.326774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.326791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.331262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.331293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.331312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.335726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.335756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.335789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.341089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.341121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.341139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.347711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.347742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.347760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.355047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.355079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.355119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.361562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.361610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.361628] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.369227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.369274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.369292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.375105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.375137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.375156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.380234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.380281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.380298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.385090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.385121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.385140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.389817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.389849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.389867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.312 [2024-11-18 08:09:17.394681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.312 [2024-11-18 08:09:17.394712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.312 [2024-11-18 08:09:17.394730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.573 [2024-11-18 08:09:17.399595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.573 [2024-11-18 08:09:17.399626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.573 [2024-11-18 08:09:17.399644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.573 [2024-11-18 08:09:17.404311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.573 [2024-11-18 08:09:17.404349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.573 [2024-11-18 08:09:17.404367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.573 [2024-11-18 08:09:17.409601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.573 [2024-11-18 08:09:17.409632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.573 [2024-11-18 08:09:17.409650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.573 [2024-11-18 08:09:17.414240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.573 [2024-11-18 08:09:17.414271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.573 [2024-11-18 08:09:17.414289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.418759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.418790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.418808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.423478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.423517] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.423535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.427891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.427921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.427938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.432333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.432363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.432381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.437688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.437719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.437736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.442621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.442651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.442668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.447238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.447269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.447286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.451755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.451786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.451804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.457069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.457101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.457119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.463451] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.463504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.463523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.470918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.470950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.470969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.478145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.478178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.478196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.486194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.486226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.486244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.492628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.492661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.492679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.497461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.497500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.497525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.502151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.502182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.502201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.506762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.506792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.506825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.511501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.511545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.511562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.516035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.516065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.516098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.520778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.520809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.520827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.525611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.525642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 
08:09:17.525676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.531162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.531194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.531211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.536503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.536534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.536568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.542407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.542446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.574 [2024-11-18 08:09:17.542465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.574 [2024-11-18 08:09:17.546974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.574 [2024-11-18 08:09:17.547005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.547023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.551404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.551434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.551452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.556101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.556131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.556149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.561039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.561071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.561089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.564245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.564277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.564294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.568124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.568170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.568188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.573937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.573984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.574002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.579894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.579928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.579953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.585947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 
00:35:24.575 [2024-11-18 08:09:17.585978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.585997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.592725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.592758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.592777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.598691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.598724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.598742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.604722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.604758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.604795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.610082] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.610114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.610133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.615620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.615652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.615671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.621479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.621518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.621538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.627306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.627338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.627356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:005d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.633362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.633401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.633421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.639111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.639158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.639176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.644887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.644919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.644936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.650705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.650737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.650755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.575 [2024-11-18 08:09:17.656646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.575 [2024-11-18 08:09:17.656677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.575 [2024-11-18 08:09:17.656702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.870 [2024-11-18 08:09:17.662828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.870 [2024-11-18 08:09:17.662862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-11-18 08:09:17.662880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.870 [2024-11-18 08:09:17.668750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.870 [2024-11-18 08:09:17.668793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-11-18 08:09:17.668827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.870 [2024-11-18 08:09:17.674857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.870 [2024-11-18 08:09:17.674890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-11-18 08:09:17.674908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.870 [2024-11-18 08:09:17.680702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.870 [2024-11-18 08:09:17.680735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-11-18 08:09:17.680753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.870 [2024-11-18 08:09:17.686826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.686858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.686876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.693389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.693420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.693438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.700565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.700597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.700616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.706168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.706200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.706218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.711995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.712027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.712044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.717688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.717720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.717739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.720399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.720429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.720446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.724944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.724975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.724992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.729183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.729213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.729236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.734527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.734557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.734574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.741070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.741102] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.741119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.748488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.748529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.748547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.871 [2024-11-18 08:09:17.754787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.754819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.754837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:24.871 5479.00 IOPS, 684.88 MiB/s [2024-11-18T07:09:17.959Z] [2024-11-18 08:09:17.760559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12766a0) 00:35:24.871 [2024-11-18 08:09:17.760592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-11-18 08:09:17.760625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:24.871 00:35:24.871 Latency(us) 00:35:24.871 [2024-11-18T07:09:17.959Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.871 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:24.871 nvme0n1 : 2.00 5481.31 685.16 0.00 0.00 2914.59 676.60 10631.40 00:35:24.871 [2024-11-18T07:09:17.959Z] =================================================================================================================== 00:35:24.871 [2024-11-18T07:09:17.959Z] Total : 5481.31 685.16 0.00 0.00 2914.59 676.60 10631.40 00:35:24.871 { 00:35:24.871 "results": [ 00:35:24.871 { 00:35:24.871 "job": "nvme0n1", 00:35:24.871 "core_mask": "0x2", 00:35:24.871 "workload": "randread", 00:35:24.871 "status": "finished", 00:35:24.871 "queue_depth": 16, 00:35:24.871 "io_size": 131072, 00:35:24.871 "runtime": 2.004083, 00:35:24.871 "iops": 5481.309905827254, 00:35:24.871 "mibps": 685.1637382284067, 00:35:24.871 "io_failed": 0, 00:35:24.871 "io_timeout": 0, 00:35:24.871 "avg_latency_us": 2914.587762571857, 00:35:24.871 "min_latency_us": 676.5985185185185, 00:35:24.871 "max_latency_us": 10631.395555555555 00:35:24.871 } 00:35:24.871 ], 00:35:24.871 "core_count": 1 00:35:24.871 } 00:35:24.871 08:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:24.871 08:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:24.871 08:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:24.871 08:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:24.871 | .driver_specific 00:35:24.871 | .nvme_error 00:35:24.871 | .status_code 00:35:24.871 | .command_transient_transport_error' 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 354 > 0 )) 00:35:25.154 
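The randread pass above ends by reading back the bdevperf JSON results and then counting digest-induced failures with `bdev_get_iostat` piped through jq, passing when the counter is positive (`(( 354 > 0 ))`). Both numbers can be cross-checked offline. A minimal Python sketch: the result fields are copied from the log above, while the iostat document is a hypothetical reconstruction whose shape mirrors the jq path `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`.

```python
import json

# bdevperf result fields copied from the results JSON in the log above.
results = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "io_size": 131072,
      "iops": 5481.309905827254,
      "mibps": 685.1637382284067
    }
  ]
}
""")

job = results["results"][0]
# MiB/s follows from IOPS times the per-I/O size (131072 B = 0.125 MiB).
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(round(derived_mibps, 2))  # agrees with the reported "mibps" field

# Hypothetical bdev_get_iostat document; the counter value (354) is the
# one the log's "(( 354 > 0 ))" check observed.
iostat = {
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 354}
            }
        }
    }]
}
errcount = (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])
print(errcount > 0)  # the digest-error test passes only when this holds
```

The `--nvme-error-stat` option passed to `bdev_nvme_set_options` earlier is what makes the driver accumulate these per-status-code counters in the iostat output.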
08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 888811 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 888811 ']' 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 888811 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 888811 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 888811' 00:35:25.154 killing process with pid 888811 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 888811 00:35:25.154 Received shutdown signal, test time was about 2.000000 seconds 00:35:25.154 00:35:25.154 Latency(us) 00:35:25.154 [2024-11-18T07:09:18.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.154 [2024-11-18T07:09:18.242Z] =================================================================================================================== 00:35:25.154 [2024-11-18T07:09:18.242Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:25.154 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 888811 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 
-- # run_bperf_err randwrite 4096 128 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=889333 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 889333 /var/tmp/bperf.sock 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 889333 ']' 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:25.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.413 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:25.413 [2024-11-18 08:09:18.309901] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:25.413 [2024-11-18 08:09:18.309987] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889333 ] 00:35:25.413 [2024-11-18 08:09:18.377952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.413 [2024-11-18 08:09:18.426068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.673 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.673 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:25.673 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:25.673 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:25.931 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:25.931 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.931 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:25.931 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.931 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:25.931 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:26.189 nvme0n1 00:35:26.189 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:26.189 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.189 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.189 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.189 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:26.189 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:26.448 Running I/O for 2 seconds... 
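The setup just completed arranges the failure mode seen throughout this log: the controller is attached with `--ddgst`, so every data PDU carries a CRC32C data digest, and `accel_error_inject_error -o crc32c -t corrupt -i 256` then makes the accel framework's CRC32C computation return a wrong value, so the recomputed digest disagrees with the one on the wire. That disagreement is what each "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pair records. A minimal, self-contained sketch of the digest check itself, using a plain bitwise CRC32C (Castagnoli) rather than SPDK's accelerated implementation; the payload bytes are illustrative:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
    the digest NVMe/TCP uses for its HDGST/DDGST fields."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check value.
assert crc32c(b"123456789") == 0xE3069283

# Illustrative PDU payload: the sender computes DDGST over the data.
payload = bytes(range(64))
ddgst = crc32c(payload)

# A single corrupted bit is enough to make the receiver's recomputed
# digest disagree, which is reported as a "data digest error" and
# completes the I/O with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
print(crc32c(corrupted) != ddgst)  # → True
```

Note the `dnr:0` (do-not-retry clear) in every completion together with the `--bdev-retry-count -1` option set above: the status is transient and retryable, so bdevperf keeps resubmitting I/O and the log accumulates hundreds of these errors within the two-second run instead of failing outright.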
00:35:26.448 [2024-11-18 08:09:19.390706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f2510 00:35:26.448 [2024-11-18 08:09:19.391901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.448 [2024-11-18 08:09:19.391959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:26.448 [2024-11-18 08:09:19.403271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f8618 00:35:26.448 [2024-11-18 08:09:19.404596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.448 [2024-11-18 08:09:19.404642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.414829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f1ca0 00:35:26.449 [2024-11-18 08:09:19.415989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.416047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.426967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166df118 00:35:26.449 [2024-11-18 08:09:19.427943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.427988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.438962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f35f0 00:35:26.449 [2024-11-18 08:09:19.440170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.440215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.451374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f81e0 00:35:26.449 [2024-11-18 08:09:19.452662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.452707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.462526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f9b30 00:35:26.449 [2024-11-18 08:09:19.463776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.463820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.474870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166e1f80 00:35:26.449 [2024-11-18 08:09:19.476257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.476302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.486512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f9b30 00:35:26.449 [2024-11-18 08:09:19.487823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.487868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.498692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166eea00 00:35:26.449 [2024-11-18 08:09:19.500021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.500065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.508433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f8a50 00:35:26.449 [2024-11-18 08:09:19.509221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.509251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.520550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166e1b48 00:35:26.449 [2024-11-18 08:09:19.521653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.521683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:26.449 [2024-11-18 08:09:19.533129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166e27f0 00:35:26.449 [2024-11-18 08:09:19.534397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.449 [2024-11-18 08:09:19.534441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.547565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166e4de8 00:35:26.710 [2024-11-18 08:09:19.549348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.549392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.555872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166fc560 00:35:26.710 [2024-11-18 08:09:19.556804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.556835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.570312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f1430 00:35:26.710 [2024-11-18 08:09:19.571773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 
[2024-11-18 08:09:19.571818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.580353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ec840 00:35:26.710 [2024-11-18 08:09:19.581066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.581096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.592663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166e01f8 00:35:26.710 [2024-11-18 08:09:19.593529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.593559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.604130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f6890 00:35:26.710 [2024-11-18 08:09:19.605244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.605275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.615900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166df988 00:35:26.710 [2024-11-18 08:09:19.616925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19565 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.616970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.627257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ebfd0 00:35:26.710 [2024-11-18 08:09:19.628075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.628118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.639082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166eee38 00:35:26.710 [2024-11-18 08:09:19.640038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.640081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.653358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166e01f8 00:35:26.710 [2024-11-18 08:09:19.654977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.655022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.665645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166fcdd0 00:35:26.710 [2024-11-18 08:09:19.667229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:36 nsid:1 lba:9444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.667275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.676313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166dfdc0 00:35:26.710 [2024-11-18 08:09:19.677613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.677643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.688189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166dece0 00:35:26.710 [2024-11-18 08:09:19.689321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.689365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.699392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f4b08 00:35:26.710 [2024-11-18 08:09:19.700376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.700419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.711049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166eaab8 00:35:26.710 [2024-11-18 08:09:19.712195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.712238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.723291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166e0ea0 00:35:26.710 [2024-11-18 08:09:19.724398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.724448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.734825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166f7100 00:35:26.710 [2024-11-18 08:09:19.735938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.735983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.749165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166fc998 00:35:26.710 [2024-11-18 08:09:19.750978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.751023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.757620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166eff18 00:35:26.710 
[2024-11-18 08:09:19.758422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.758465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.770196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166e84c0 00:35:26.710 [2024-11-18 08:09:19.771328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.771357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.783126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.710 [2024-11-18 08:09:19.783717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.783748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.710 [2024-11-18 08:09:19.797026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.710 [2024-11-18 08:09:19.797230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.710 [2024-11-18 08:09:19.797271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.971 [2024-11-18 08:09:19.810845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with 
pdu=0x2000166ea248 00:35:26.971 [2024-11-18 08:09:19.811084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.971 [2024-11-18 08:09:19.811128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.971 [2024-11-18 08:09:19.824867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.971 [2024-11-18 08:09:19.825116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.971 [2024-11-18 08:09:19.825146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.971 [2024-11-18 08:09:19.838739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.971 [2024-11-18 08:09:19.838996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.971 [2024-11-18 08:09:19.839048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.971 [2024-11-18 08:09:19.852719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.971 [2024-11-18 08:09:19.852961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.971 [2024-11-18 08:09:19.853003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.971 [2024-11-18 08:09:19.866892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.971 [2024-11-18 08:09:19.867154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.971 [2024-11-18 08:09:19.867200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.971 [2024-11-18 08:09:19.880931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.971 [2024-11-18 08:09:19.881177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.971 [2024-11-18 08:09:19.881205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.971 [2024-11-18 08:09:19.894903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:19.895139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:19.895167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:19.908931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:19.909198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:19.909241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:19.922632] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:19.922852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:19.922880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:19.936534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:19.936775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:19.936818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:19.950436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:19.950674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:19.950717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:19.964488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:19.964714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:19.964743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:35:26.972 [2024-11-18 08:09:19.978488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:19.978753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:19.978783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:19.992480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:19.992740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:19.992775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:20.006133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:20.006330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:20.006361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:20.019071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:20.019262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:20.019291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:20.032789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:20.033023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:20.033058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:20.046197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:26.972 [2024-11-18 08:09:20.046504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.972 [2024-11-18 08:09:20.046555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.972 [2024-11-18 08:09:20.059858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.060120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.060152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.073854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.074081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.074124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.087982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.088284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.088330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.101925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.102196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.102241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.115986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.116265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.116296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.129974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.130224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.130267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.143874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.144053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.144095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.157791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.158027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.158056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.171724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.172008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.172053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.185316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.185576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 
[2024-11-18 08:09:20.185607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.199186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.199456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.199501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.213209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.213585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.213615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.227010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.227250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.227294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.240859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.241132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3754 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.241162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.254613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.254844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.254888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.268257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.268518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.268548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.233 [2024-11-18 08:09:20.282121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.233 [2024-11-18 08:09:20.282382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.233 [2024-11-18 08:09:20.282426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.234 [2024-11-18 08:09:20.296098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.234 [2024-11-18 08:09:20.296345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.234 [2024-11-18 08:09:20.296373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.234 [2024-11-18 08:09:20.309856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.234 [2024-11-18 08:09:20.310110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.234 [2024-11-18 08:09:20.310153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.323819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.324033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.324066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.337711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.337941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.337985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.351656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.351894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.351923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.365838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.366103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.366148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 19473.00 IOPS, 76.07 MiB/s [2024-11-18T07:09:20.583Z] [2024-11-18 08:09:20.380013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.380252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.380294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.394037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.394243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.394284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.407984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.408220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.408264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.422084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.422343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.422386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.435605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.435895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.435937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.449634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.449856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.449898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.463498] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.463718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.463747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.477449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.477756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.477802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.491468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.491707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.491750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.505445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.505692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.505721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:35:27.495 [2024-11-18 08:09:20.519254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.519522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.519562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.533391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.533707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.533737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.547623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.547878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.547922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.561700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.561961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.562012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.495 [2024-11-18 08:09:20.575675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.495 [2024-11-18 08:09:20.575922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.495 [2024-11-18 08:09:20.575966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.757 [2024-11-18 08:09:20.589527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.757 [2024-11-18 08:09:20.589781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.757 [2024-11-18 08:09:20.589812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.757 [2024-11-18 08:09:20.603619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.757 [2024-11-18 08:09:20.603896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.757 [2024-11-18 08:09:20.603940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.757 [2024-11-18 08:09:20.617595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.757 [2024-11-18 08:09:20.617837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.757 [2024-11-18 08:09:20.617866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.757 [2024-11-18 08:09:20.631799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.757 [2024-11-18 08:09:20.632061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.757 [2024-11-18 08:09:20.632106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.757 [2024-11-18 08:09:20.645729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.757 [2024-11-18 08:09:20.645929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.757 [2024-11-18 08:09:20.645955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.757 [2024-11-18 08:09:20.659781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.757 [2024-11-18 08:09:20.660103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.660149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.674066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.674305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.674332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.688115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.688437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.688465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.702190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.702434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.702477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.716167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.716412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.716440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.730114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.730354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 
[2024-11-18 08:09:20.730397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.744170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.744432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.744460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.757828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.758087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.758132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.771715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.771939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.771967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.785666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.785958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21385 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.786000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.799651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.799963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.800005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.813737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.813975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.814018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.827870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.828160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.828204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:27.758 [2024-11-18 08:09:20.841911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:27.758 [2024-11-18 08:09:20.842163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:92 nsid:1 lba:20600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.758 [2024-11-18 08:09:20.842206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.855918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.856156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.856183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.869952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.870195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.870239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.883927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.884174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.884216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.897996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.898235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.898277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.911785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.912051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.912094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.925812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.926051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.926098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.939900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.940146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.940189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.953980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 
[2024-11-18 08:09:20.954217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.954259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.967947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.968233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.968276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.981689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.981919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.981946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:20.995669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:20.995933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:20.995974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:21.009578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:21.009810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:21.009836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:21.023680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:21.023929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:21.023972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:21.037516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.018 [2024-11-18 08:09:21.037773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.018 [2024-11-18 08:09:21.037815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.018 [2024-11-18 08:09:21.051627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.019 [2024-11-18 08:09:21.051867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.019 [2024-11-18 08:09:21.051910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.019 [2024-11-18 08:09:21.065587] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.019 [2024-11-18 08:09:21.065836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.019 [2024-11-18 08:09:21.065883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.019 [2024-11-18 08:09:21.079650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.019 [2024-11-18 08:09:21.079883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.019 [2024-11-18 08:09:21.079912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.019 [2024-11-18 08:09:21.093615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.019 [2024-11-18 08:09:21.093858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.019 [2024-11-18 08:09:21.093899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.107612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.107974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.108015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:35:28.279 [2024-11-18 08:09:21.121535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.121753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.121782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.135344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.135644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.135673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.149257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.149465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.149512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.163208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.163435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.163477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.177163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.177397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.177438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.191022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.191249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.191293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.205067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.205299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.205342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.218976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.219202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.219246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.232806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.233052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.233095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.246812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.247041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.247082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.260646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.260926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.260970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.274486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.274712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.274741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.288288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.288773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.288804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.302188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.302433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.302475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.316129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.316361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 [2024-11-18 08:09:21.316388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.279 [2024-11-18 08:09:21.330019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248 00:35:28.279 [2024-11-18 08:09:21.330267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.279 
[2024-11-18 08:09:21.330310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:28.279 [2024-11-18 08:09:21.343963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248
00:35:28.279 [2024-11-18 08:09:21.344198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.279 [2024-11-18 08:09:21.344225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:28.279 [2024-11-18 08:09:21.357877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248
00:35:28.279 [2024-11-18 08:09:21.358105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.279 [2024-11-18 08:09:21.358145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:28.539 [2024-11-18 08:09:21.371786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8022c0) with pdu=0x2000166ea248
00:35:28.539 [2024-11-18 08:09:21.372026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.539 [2024-11-18 08:09:21.372068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:28.539 18884.50 IOPS, 73.77 MiB/s
00:35:28.539 Latency(us)
00:35:28.539 [2024-11-18T07:09:21.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:28.539 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:28.539 nvme0n1 : 2.01 18886.83 73.78 0.00 0.00 6762.63 2682.12 16602.45
00:35:28.539 [2024-11-18T07:09:21.627Z] ===================================================================================================================
00:35:28.539 [2024-11-18T07:09:21.627Z] Total : 18886.83 73.78 0.00 0.00 6762.63 2682.12 16602.45
00:35:28.539 {
00:35:28.539   "results": [
00:35:28.539     {
00:35:28.539       "job": "nvme0n1",
00:35:28.539       "core_mask": "0x2",
00:35:28.539       "workload": "randwrite",
00:35:28.539       "status": "finished",
00:35:28.539       "queue_depth": 128,
00:35:28.539       "io_size": 4096,
00:35:28.539       "runtime": 2.00653,
00:35:28.539       "iops": 18886.834485405154,
00:35:28.539       "mibps": 73.77669720861388,
00:35:28.539       "io_failed": 0,
00:35:28.539       "io_timeout": 0,
00:35:28.539       "avg_latency_us": 6762.62854509152,
00:35:28.539       "min_latency_us": 2682.1214814814816,
00:35:28.539       "max_latency_us": 16602.453333333335
00:35:28.539     }
00:35:28.539   ],
00:35:28.539   "core_count": 1
00:35:28.539 }
00:35:28.539 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:28.539 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:28.539 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:28.539 | .driver_specific
00:35:28.539 | .nvme_error
00:35:28.539 | .status_code
00:35:28.539 | .command_transient_transport_error'
00:35:28.539 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 ))
00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 889333
00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@954 -- # '[' -z 889333 ']' 00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 889333 00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 889333 00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 889333' 00:35:28.798 killing process with pid 889333 00:35:28.798 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 889333 00:35:28.798 Received shutdown signal, test time was about 2.000000 seconds 00:35:28.798 00:35:28.798 Latency(us) 00:35:28.798 [2024-11-18T07:09:21.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.798 [2024-11-18T07:09:21.886Z] =================================================================================================================== 00:35:28.798 [2024-11-18T07:09:21.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:28.799 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 889333 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:29.057 08:09:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=889743 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 889743 /var/tmp/bperf.sock 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 889743 ']' 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:29.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.057 08:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:29.057 [2024-11-18 08:09:21.957579] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:35:29.057 [2024-11-18 08:09:21.957661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889743 ] 00:35:29.057 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:29.057 Zero copy mechanism will not be used. 00:35:29.057 [2024-11-18 08:09:22.024507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.057 [2024-11-18 08:09:22.071247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.315 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.315 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:29.315 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:29.315 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:29.574 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:29.574 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.574 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:29.574 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.574 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.574 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.832 nvme0n1 00:35:29.832 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:29.832 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.832 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:29.832 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.832 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:29.832 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:29.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:29.832 Zero copy mechanism will not be used. 00:35:29.832 Running I/O for 2 seconds... 
00:35:30.092 [2024-11-18 08:09:22.922458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.092 [2024-11-18 08:09:22.922583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-11-18 08:09:22.922626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.092 [2024-11-18 08:09:22.928389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.092 [2024-11-18 08:09:22.928534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-11-18 08:09:22.928569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.092 [2024-11-18 08:09:22.934275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.092 [2024-11-18 08:09:22.934348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-11-18 08:09:22.934376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.092 [2024-11-18 08:09:22.940179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.940257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.940285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.945449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.945554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.945585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.950821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.950911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.950939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.956025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.956115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.956142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.961355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.961462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.961516] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.966519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.966637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.966667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.971677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.971759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.971790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.976901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.976988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.977015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.982071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.982156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.982187] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.987272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.987349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.987377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.992405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.992523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.992552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:22.997553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:22.997642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:22.997671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.003378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.003456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:30.093 [2024-11-18 08:09:23.003485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.008935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.009052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.009079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.014337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.014416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.014444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.019342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.019417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.019450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.024460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.024563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.024591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.030367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.030441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.030469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.035591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.035680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.035709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.040656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.040760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.040803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.045858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.045952] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.045980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.051119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.051216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.051243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.056189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.056291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.056318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.061244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.061342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.061370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.066587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.066709] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.066742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.071905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.072040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.072069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.093 [2024-11-18 08:09:23.077653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.093 [2024-11-18 08:09:23.077764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.093 [2024-11-18 08:09:23.077793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.094 [2024-11-18 08:09:23.084057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.094 [2024-11-18 08:09:23.084237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.094 [2024-11-18 08:09:23.084265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.094 [2024-11-18 08:09:23.090627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 
00:35:30.094 [2024-11-18 08:09:23.090843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.090874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.097867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.098037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.098065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.104755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.104871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.104898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.112216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.112419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.112447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.119450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.119629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.119657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.125737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.125910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.125955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.131970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.132205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.132251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.138537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.138730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.138758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.145099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.145315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.145343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.152300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.152541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.152588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.159254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.159436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.159463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.165657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.165855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.165883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.171924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.172029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.172057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.094 [2024-11-18 08:09:23.177656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.094 [2024-11-18 08:09:23.177811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.094 [2024-11-18 08:09:23.177839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.355 [2024-11-18 08:09:23.182829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.355 [2024-11-18 08:09:23.182957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.355 [2024-11-18 08:09:23.182985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.355 [2024-11-18 08:09:23.188414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.355 [2024-11-18 08:09:23.188534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.355 [2024-11-18 08:09:23.188563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.193654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.193787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.193815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.198814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.198985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.199014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.205154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.205309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.205337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.211416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.211583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.211612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.217740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.217864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.217891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.224050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.224203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.224230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.230413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.230547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.230581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.236647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.236833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.236860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.242894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.243069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.243096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.249228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.249386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.249413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.255447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.255641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.255670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.261719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.261885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.261914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.268117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.268271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.268301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.274464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.274677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.274708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.280778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.280941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.280971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.286650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.286927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.286956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.293188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.293511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.293542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.300341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.300679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.300709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.305532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.305850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.305881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.310452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.310761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.310803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.315253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.315565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.315596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.320031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.320336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.320366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.324905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.325140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.325171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.329598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.329895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.329926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.334287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.334603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.334633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.339062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.339356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.356 [2024-11-18 08:09:23.339387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.356 [2024-11-18 08:09:23.343904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.356 [2024-11-18 08:09:23.344194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.344224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.348765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.349039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.349068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.353557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.353862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.353892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.358288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.358597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.358628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.362873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.363122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.363150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.367326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.367570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.367601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.372139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.372365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.372401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.377108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.377333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.377362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.382172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.382402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.382431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.387283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.387565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.387596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.392870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.393095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.393128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.398259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.398558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.398591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.404563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.404805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.404836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.409973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.410244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.410274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.415223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.415607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.415637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.420330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.420599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.420630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.425347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.425666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.425711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.430708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.430989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.431019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.436180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.436459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.436501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.357 [2024-11-18 08:09:23.441529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.357 [2024-11-18 08:09:23.441766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.357 [2024-11-18 08:09:23.441810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.618 [2024-11-18 08:09:23.447034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.618 [2024-11-18 08:09:23.447354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.618 [2024-11-18 08:09:23.447383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.618 [2024-11-18 08:09:23.452469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.618 [2024-11-18 08:09:23.452728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.618 [2024-11-18 08:09:23.452763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.618 [2024-11-18 08:09:23.457983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.618 [2024-11-18 08:09:23.458254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.618 [2024-11-18 08:09:23.458284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.618 [2024-11-18 08:09:23.463360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.618 [2024-11-18 08:09:23.463596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.618 [2024-11-18 08:09:23.463631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.618 [2024-11-18 08:09:23.468795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.618 [2024-11-18 08:09:23.469091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.618 [2024-11-18 08:09:23.469120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.618 [2024-11-18 08:09:23.474296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.618 [2024-11-18 08:09:23.474554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.618 [2024-11-18 08:09:23.474586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.618 [2024-11-18 08:09:23.479655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.480000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.480032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.485063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.485313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.485343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.490170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.490403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.490430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.495468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.495746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.495776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.500828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.501030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.501058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.506145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.506357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.506402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.511340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.511597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.511633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.516790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.516997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.517037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.522152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.522415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.522446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.527432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.527663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.619 [2024-11-18 08:09:23.527694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.619 [2024-11-18 08:09:23.532761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.619 [2024-11-18 08:09:23.532997] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.533026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.538181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.538447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.538499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.543304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.543589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.543620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.548722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.548990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.549020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.554054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 
[2024-11-18 08:09:23.554284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.554312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.559399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.559666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.559697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.564785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.564999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.565026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.569928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.570172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.570206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.575202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.575426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.575458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.580617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.580808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.580836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.586048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.586335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.586368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.591357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.591659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.591690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.596572] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.596823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.596869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.601872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.602036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.602078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.607235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.607552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.607599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.612590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.612849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.612878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:35:30.619 [2024-11-18 08:09:23.617741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.617924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.617954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.622932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.623132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.623159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.628255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.628447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.628498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.633788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.634072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.634104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.639040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.639302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.639332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.644348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.644629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.644660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.649601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.649855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.649891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.654810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.655018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.655045] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.660033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.660279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.660311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.665452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.665738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.665772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.670696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.670953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.670982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.676107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.676390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.676420] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.681252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.681606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.681637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.686805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.687021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.687051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.692047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.692257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.692286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.697362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.697563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:30.619 [2024-11-18 08:09:23.697595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.619 [2024-11-18 08:09:23.702591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.619 [2024-11-18 08:09:23.702825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.619 [2024-11-18 08:09:23.702855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.707853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.708105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.708135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.712990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.713170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.713197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.718154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.718377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.718407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.723397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.723593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.723624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.728865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.729064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.729090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.734156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.734386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.734416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.739380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.739631] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.739661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.744582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.744825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.744855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.749789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.750057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.750091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.755043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.755258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.755285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.760348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.760600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.760628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.765683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.765890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.765919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.770908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.771071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.771100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.776188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.879 [2024-11-18 08:09:23.776422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.776451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.879 [2024-11-18 08:09:23.781570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 
00:35:30.879 [2024-11-18 08:09:23.781777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.879 [2024-11-18 08:09:23.781806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.880 [2024-11-18 08:09:23.786880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.880 [2024-11-18 08:09:23.787119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.880 [2024-11-18 08:09:23.787154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.880 [2024-11-18 08:09:23.792016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.880 [2024-11-18 08:09:23.792232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.880 [2024-11-18 08:09:23.792260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.880 [2024-11-18 08:09:23.797195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:30.880 [2024-11-18 08:09:23.797407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.880 [2024-11-18 08:09:23.797439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.880 [2024-11-18 08:09:23.802508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.802730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.802773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.807814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.808095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.808128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.813194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.813391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.813419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.818330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.818653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.818683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.823590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.823823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.823882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.828835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.829017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.829045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.834128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.834376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.834410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.839204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.839433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.839462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.844589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.844762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.844792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.850000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.850276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.850332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.855087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.855310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.855337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.860412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.860672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.860703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.865757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.866013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.866045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.870963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.871204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.871244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.876300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.876533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.876563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.881538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.881722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.881752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.886783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.886982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.887010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.891951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.892142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.892171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.897116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.897397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.897429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.902461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.902685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.902715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.907866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.908095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.908129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.912978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.913198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.913227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.918141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.918288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.918315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.880 5662.00 IOPS, 707.75 MiB/s [2024-11-18T07:09:23.968Z] [2024-11-18 08:09:23.923940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.924043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.924076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.928967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.929144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.929174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.934038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.934207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.934237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.939369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.939581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.939609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.944486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.944731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.944761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.950753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.950955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.951003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.956462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.956563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.956591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.961939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:30.880 [2024-11-18 08:09:23.962111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.880 [2024-11-18 08:09:23.962143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:30.880 [2024-11-18 08:09:23.967577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:23.967709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:23.967739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:23.972654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:23.972723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:23.972756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:23.977585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:23.977703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:23.977732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:23.982101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:23.982189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:23.982216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:23.986578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:23.986670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:23.986699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:23.990900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:23.990991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:23.991018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:23.995285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:23.995378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:23.995405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:23.999679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:23.999765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:23.999793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.004108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.004194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.004224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.008434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.008525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.008557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.012659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.012743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.012787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.016892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.016977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.017004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.021307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.021381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.021407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.025636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.025746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.025773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.029977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.030071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.030097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.034358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.034445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.034503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.038692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.038764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.038792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.043050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.043144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.043171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.047277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.047353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.047379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.051649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.051724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.051751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.140 [2024-11-18 08:09:24.055979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.140 [2024-11-18 08:09:24.056058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.140 [2024-11-18 08:09:24.056085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.060273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.060355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.060382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.064538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.064620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.064663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.068952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.069052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.069080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.073317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.073390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.073418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.077674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.077763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.077813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.081971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.082078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.082106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.086540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.086713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.086748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.091771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.091897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.091926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.096945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.097125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.097157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.103001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.103211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.103241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.108582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.108690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.108720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.114112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.114274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.114303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.119346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.119524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.119554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.124537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.124701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.124732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.129903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.130017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.130046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.135202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.135364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.135392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.140433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.140572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.140602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.145762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.145899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.145927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.151100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.151237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.151266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.156286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.156431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.156460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.161504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.161668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.161697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.166699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.166886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.166916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.171824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.171940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.171972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.176902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.177018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.177046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.182206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.182385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.182415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.187439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.141 [2024-11-18 08:09:24.187589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.141 [2024-11-18 08:09:24.187617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.141 [2024-11-18 08:09:24.192481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.141 [2024-11-18 08:09:24.192684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.141 [2024-11-18 08:09:24.192714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.141 [2024-11-18 08:09:24.197596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.142 [2024-11-18 08:09:24.197766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.142 [2024-11-18 08:09:24.197797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.142 [2024-11-18 08:09:24.203658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.142 [2024-11-18 08:09:24.203862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.142 [2024-11-18 08:09:24.203892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.142 [2024-11-18 08:09:24.208826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.142 [2024-11-18 08:09:24.208962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.142 [2024-11-18 08:09:24.208990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.142 [2024-11-18 08:09:24.213302] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.142 [2024-11-18 08:09:24.213425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.142 [2024-11-18 08:09:24.213454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.142 [2024-11-18 08:09:24.217794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.142 [2024-11-18 08:09:24.217948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.142 [2024-11-18 08:09:24.217979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.142 [2024-11-18 08:09:24.223116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.142 [2024-11-18 08:09:24.223221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.142 [2024-11-18 08:09:24.223256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.402 [2024-11-18 08:09:24.228206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.402 [2024-11-18 08:09:24.228321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.402 [2024-11-18 08:09:24.228350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:35:31.402 [2024-11-18 08:09:24.232822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.402 [2024-11-18 08:09:24.232980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.402 [2024-11-18 08:09:24.233012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.402 [2024-11-18 08:09:24.237419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.402 [2024-11-18 08:09:24.237579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.402 [2024-11-18 08:09:24.237611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.402 [2024-11-18 08:09:24.241934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.402 [2024-11-18 08:09:24.242117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.402 [2024-11-18 08:09:24.242145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.402 [2024-11-18 08:09:24.246551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.402 [2024-11-18 08:09:24.246735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.402 [2024-11-18 08:09:24.246765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.402 [2024-11-18 08:09:24.251132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.402 [2024-11-18 08:09:24.251256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.402 [2024-11-18 08:09:24.251285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.402 [2024-11-18 08:09:24.255543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.255694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.255739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.260434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.260601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.260630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.265691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.265934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.265963] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.271404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.271653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.271685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.276683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.276879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.276909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.281276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.281436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.281466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.285894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.286037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.286066] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.290420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.290602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.290632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.294809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.294948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.294977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.299872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.300125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.300156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.304909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.305081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:31.403 [2024-11-18 08:09:24.305110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.310253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.310439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.310469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.315423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.315617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.315648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.321462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.321590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.321620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.327102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.327320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.327353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.331754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.331939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.331969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.336206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.336334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.336363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.341377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.341559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.341590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.346151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.346279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.346308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.350647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.350779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.350815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.355161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.355351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.355380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.359760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.359910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.359939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.364073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 
[2024-11-18 08:09:24.364208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.364237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.368404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.368582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.368612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.372705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.372865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.372894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.377121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.377283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.377312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.382335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.403 [2024-11-18 08:09:24.382537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.403 [2024-11-18 08:09:24.382567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.403 [2024-11-18 08:09:24.387150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.387260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.387289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.391462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.391603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.391633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.395810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.395938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.395967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.400206] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.400324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.400352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.404627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.404739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.404768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.408914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.409042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.409071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.413130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.413240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.413269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:35:31.404 [2024-11-18 08:09:24.417463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.417634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.417663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.421911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.422059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.422087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.426176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.426304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.426333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.430408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.430571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.430600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.434739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.434880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.434908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.439030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.439143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.439170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.443506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.443631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.443660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.447851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.447972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.448000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.452259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.452381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.452411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.456544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.456684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.456714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.460857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.460983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.461013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.465294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.465398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.465436] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.469648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.469753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.469808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.474017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.474126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.474157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.478393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.478559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.478590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.482849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.482975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.404 [2024-11-18 08:09:24.483004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.404 [2024-11-18 08:09:24.487143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.404 [2024-11-18 08:09:24.487274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.404 [2024-11-18 08:09:24.487305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.491483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.491624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.491655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.495806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.495954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.495980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.500211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.500341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.500370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.504613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.504735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.504764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.509014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.509152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.509195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.513382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.513536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.513566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.517694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.517815] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.517859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.522088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.522204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.522250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.526458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.526617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.526648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.530855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.530971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.531000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.535309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.535423] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.535452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.539671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.539804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.539832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.544014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.544135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.544163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.548188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.548300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.548328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.552429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with 
pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.552568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.552610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.556703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.556838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.556867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.560965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.561101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.561144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.565310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.665 [2024-11-18 08:09:24.565435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.665 [2024-11-18 08:09:24.565467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.665 [2024-11-18 08:09:24.569752] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.569899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.569927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.574165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.574286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.574314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.578442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.578583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.578618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.582774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.582903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.582931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.587069] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.587200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.587228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.591310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.591419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.591449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.595523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.595643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.595672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.599944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.600064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.600107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:31.666 [2024-11-18 08:09:24.604297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.604417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.604460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.608588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.608702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.608731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.612876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.612988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.613017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.617061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.617212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.617255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.621436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.621573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.621603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.625780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.625929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.625958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.630071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.630195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.630223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.634419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.634545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.634575] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.638868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.638974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.639003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.643334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.643461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.643512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.647637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.647754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.647784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.652141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.652285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.652329] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.656402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.656539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.656569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.660795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.660903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.660932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.665047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.665191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.665219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.669374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.669507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.666 [2024-11-18 08:09:24.669539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.673806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.673937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.673979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.678265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.678377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.678405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.682379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.666 [2024-11-18 08:09:24.682483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.666 [2024-11-18 08:09:24.682538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.666 [2024-11-18 08:09:24.686659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.686779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.686825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.690922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.691046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.691082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.695150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.695271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.695301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.699459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.699607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.699636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.703902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.704018] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.704046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.708194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.708308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.708341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.712571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.712689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.712718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.716920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.717035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.717063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.721122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.721237] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.721265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.725602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.725735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.725765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.730030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.730147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.730181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.734323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.734431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.734459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.738573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with 
pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.738695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.738725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.743043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.743170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.743198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.747310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.747438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.747467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.667 [2024-11-18 08:09:24.751695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.667 [2024-11-18 08:09:24.751824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.667 [2024-11-18 08:09:24.751863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.927 [2024-11-18 08:09:24.756038] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.927 [2024-11-18 08:09:24.756162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.927 [2024-11-18 08:09:24.756190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.927 [2024-11-18 08:09:24.760530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.927 [2024-11-18 08:09:24.760655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.927 [2024-11-18 08:09:24.760685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.927 [2024-11-18 08:09:24.764943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.927 [2024-11-18 08:09:24.765059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.927 [2024-11-18 08:09:24.765088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.927 [2024-11-18 08:09:24.769129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8 00:35:31.927 [2024-11-18 08:09:24.769248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.927 [2024-11-18 08:09:24.769277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.927 [2024-11-18 08:09:24.773388] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.773532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.773565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.777792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.777926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.777954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.782151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.782270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.782299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.786548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.786669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.786699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.790924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.791041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.791069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.795225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.795359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.795387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.799580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.799719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.799748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.803911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.804025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.804054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.808151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.808255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.808285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.812631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.812753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.812782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.816950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.817066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.817096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.821319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.821437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.821465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.825857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.825966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.825994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.927 [2024-11-18 08:09:24.830198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.927 [2024-11-18 08:09:24.830317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.927 [2024-11-18 08:09:24.830346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.834655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.834771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.834800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.839022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.839134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.839162]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.843680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.843894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.843943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.848834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.848953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.848982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.854447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.854701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.854731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.859767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.859936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.859965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.864877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.865110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.865139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.870029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.870177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.870205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.875192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.875374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.875403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.880418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.880596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.880626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.885591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.885810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.885839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.890853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.891001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.891030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.896901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.897107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.897135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.902207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.902378] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.902407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.907524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.907669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.907698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.912726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.912879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.912907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.917891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.918068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.918097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.928 [2024-11-18 08:09:24.922959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x802600) with pdu=0x2000166ff3c8
00:35:31.928 [2024-11-18 08:09:24.923148]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.928 [2024-11-18 08:09:24.923177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.928 6170.00 IOPS, 771.25 MiB/s
00:35:31.928 Latency(us)
00:35:31.928 [2024-11-18T07:09:25.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:31.928 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:31.928 nvme0n1 : 2.00 6167.63 770.95 0.00 0.00 2586.98 1893.26 7427.41
00:35:31.928 [2024-11-18T07:09:25.016Z] ===================================================================================================================
00:35:31.928 [2024-11-18T07:09:25.016Z] Total : 6167.63 770.95 0.00 0.00 2586.98 1893.26 7427.41
00:35:31.928 {
00:35:31.928 "results": [
00:35:31.928 {
00:35:31.928 "job": "nvme0n1",
00:35:31.928 "core_mask": "0x2",
00:35:31.928 "workload": "randwrite",
00:35:31.928 "status": "finished",
00:35:31.928 "queue_depth": 16,
00:35:31.928 "io_size": 131072,
00:35:31.928 "runtime": 2.004174,
00:35:31.928 "iops": 6167.628160030017,
00:35:31.929 "mibps": 770.9535200037521,
00:35:31.929 "io_failed": 0,
00:35:31.929 "io_timeout": 0,
00:35:31.929 "avg_latency_us": 2586.9756087095916,
00:35:31.929 "min_latency_us": 1893.2622222222221,
00:35:31.929 "max_latency_us": 7427.413333333333
00:35:31.929 }
00:35:31.929 ],
00:35:31.929 "core_count": 1
00:35:31.929 }
00:35:31.929 08:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:31.929 08:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:31.929 08:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:31.929 | .driver_specific
00:35:31.929 |
.nvme_error
00:35:31.929 | .status_code
00:35:31.929 | .command_transient_transport_error'
00:35:31.929 08:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 399 > 0 ))
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 889743
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 889743 ']'
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 889743
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 889743
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 889743'
killing process with pid 889743
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 889743
Received shutdown signal, test time was about 2.000000 seconds
00:35:32.188
00:35:32.188 Latency(us)
00:35:32.188 [2024-11-18T07:09:25.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:32.188
[2024-11-18T07:09:25.276Z] ===================================================================================================================
00:35:32.188 [2024-11-18T07:09:25.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:32.188 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 889743
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 888374
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 888374 ']'
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 888374
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 888374
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 888374'
killing process with pid 888374
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 888374
00:35:32.447 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 888374
00:35:32.706
00:35:32.706 real 0m15.154s
00:35:32.706 user 0m30.170s
00:35:32.706 sys 0m4.440s
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@1130 -- # xtrace_disable
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:32.706 ************************************
00:35:32.706 END TEST nvmf_digest_error
00:35:32.706 ************************************
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 888374 ']'
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 888374
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 888374 ']'
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 888374
00:35:32.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (888374) - No such process
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 888374 is not found'
Process with pid 888374 is not found
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:32.706 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:35.246
00:35:35.246 real 0m35.461s
00:35:35.246 user 1m2.275s
00:35:35.246 sys 0m10.463s
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:35:35.246 ************************************
00:35:35.246 END TEST nvmf_digest
00:35:35.246 ************************************
00:35:35.246 08:09:27
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.246 ************************************
00:35:35.246 START TEST nvmf_bdevperf
00:35:35.246 ************************************
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:35:35.246 * Looking for test storage...
00:35:35.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:35:35.246 08:09:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # ((
v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:35:35.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:35.246 --rc genhtml_branch_coverage=1
00:35:35.246 --rc genhtml_function_coverage=1
00:35:35.246 --rc genhtml_legend=1
00:35:35.246 --rc geninfo_all_blocks=1
00:35:35.246 --rc geninfo_unexecuted_blocks=1
00:35:35.246
00:35:35.246 '
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 --
# LCOV_OPTS='
00:35:35.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:35.246 --rc genhtml_branch_coverage=1
00:35:35.246 --rc genhtml_function_coverage=1
00:35:35.246 --rc genhtml_legend=1
00:35:35.246 --rc geninfo_all_blocks=1
00:35:35.246 --rc geninfo_unexecuted_blocks=1
00:35:35.246
00:35:35.246 '
00:35:35.246 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:35:35.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:35.246 --rc genhtml_branch_coverage=1
00:35:35.246 --rc genhtml_function_coverage=1
00:35:35.246 --rc genhtml_legend=1
00:35:35.246 --rc geninfo_all_blocks=1
00:35:35.246 --rc geninfo_unexecuted_blocks=1
00:35:35.246
00:35:35.246 '
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:35:35.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:35.247 --rc genhtml_branch_coverage=1
00:35:35.247 --rc genhtml_function_coverage=1
00:35:35.247 --rc genhtml_legend=1
00:35:35.247 --rc geninfo_all_blocks=1
00:35:35.247 --rc geninfo_unexecuted_blocks=1
00:35:35.247
00:35:35.247 '
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- #
NVMF_IP_LEAST_ADDR=8 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:35.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:35.247 08:09:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.151 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.151 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.151 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.151 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.151 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.151 08:09:30 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.151 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.151 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:37.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.152 
08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:37.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:37.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:37.152 08:09:30 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:37.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:37.152 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:37.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:37.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:35:37.410 00:35:37.410 --- 10.0.0.2 ping statistics --- 00:35:37.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.410 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:37.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:37.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:35:37.410 00:35:37.410 --- 10.0.0.1 ping statistics --- 00:35:37.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.410 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=892103 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 892103 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 892103 ']' 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.410 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:37.411 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.411 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.411 [2024-11-18 08:09:30.379868] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:37.411 [2024-11-18 08:09:30.379971] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.411 [2024-11-18 08:09:30.470205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:37.669 [2024-11-18 08:09:30.518329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.669 [2024-11-18 08:09:30.518402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.669 [2024-11-18 08:09:30.518415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.669 [2024-11-18 08:09:30.518440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.669 [2024-11-18 08:09:30.518449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:37.669 [2024-11-18 08:09:30.519892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.669 [2024-11-18 08:09:30.519956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.669 [2024-11-18 08:09:30.519960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.669 [2024-11-18 08:09:30.656440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.669 Malloc0 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.669 [2024-11-18 08:09:30.718186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.669 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:37.670 
08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:37.670 { 00:35:37.670 "params": { 00:35:37.670 "name": "Nvme$subsystem", 00:35:37.670 "trtype": "$TEST_TRANSPORT", 00:35:37.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.670 "adrfam": "ipv4", 00:35:37.670 "trsvcid": "$NVMF_PORT", 00:35:37.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.670 "hdgst": ${hdgst:-false}, 00:35:37.670 "ddgst": ${ddgst:-false} 00:35:37.670 }, 00:35:37.670 "method": "bdev_nvme_attach_controller" 00:35:37.670 } 00:35:37.670 EOF 00:35:37.670 )") 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:37.670 08:09:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:37.670 "params": { 00:35:37.670 "name": "Nvme1", 00:35:37.670 "trtype": "tcp", 00:35:37.670 "traddr": "10.0.0.2", 00:35:37.670 "adrfam": "ipv4", 00:35:37.670 "trsvcid": "4420", 00:35:37.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.670 "hdgst": false, 00:35:37.670 "ddgst": false 00:35:37.670 }, 00:35:37.670 "method": "bdev_nvme_attach_controller" 00:35:37.670 }' 00:35:37.927 [2024-11-18 08:09:30.768285] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:35:37.928 [2024-11-18 08:09:30.768368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892167 ] 00:35:37.928 [2024-11-18 08:09:30.843692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.928 [2024-11-18 08:09:30.891691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.185 Running I/O for 1 seconds... 00:35:39.121 8272.00 IOPS, 32.31 MiB/s 00:35:39.121 Latency(us) 00:35:39.121 [2024-11-18T07:09:32.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.121 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:39.121 Verification LBA range: start 0x0 length 0x4000 00:35:39.121 Nvme1n1 : 1.01 8357.61 32.65 0.00 0.00 15255.29 546.13 15340.28 00:35:39.121 [2024-11-18T07:09:32.209Z] =================================================================================================================== 00:35:39.121 [2024-11-18T07:09:32.209Z] Total : 8357.61 32.65 0.00 0.00 15255.29 546.13 15340.28 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=892386 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:39.382 { 00:35:39.382 "params": { 00:35:39.382 "name": "Nvme$subsystem", 00:35:39.382 "trtype": "$TEST_TRANSPORT", 00:35:39.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.382 "adrfam": "ipv4", 00:35:39.382 "trsvcid": "$NVMF_PORT", 00:35:39.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.382 "hdgst": ${hdgst:-false}, 00:35:39.382 "ddgst": ${ddgst:-false} 00:35:39.382 }, 00:35:39.382 "method": "bdev_nvme_attach_controller" 00:35:39.382 } 00:35:39.382 EOF 00:35:39.382 )") 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:39.382 08:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:39.382 "params": { 00:35:39.382 "name": "Nvme1", 00:35:39.382 "trtype": "tcp", 00:35:39.382 "traddr": "10.0.0.2", 00:35:39.382 "adrfam": "ipv4", 00:35:39.382 "trsvcid": "4420", 00:35:39.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:39.382 "hdgst": false, 00:35:39.382 "ddgst": false 00:35:39.382 }, 00:35:39.382 "method": "bdev_nvme_attach_controller" 00:35:39.382 }' 00:35:39.382 [2024-11-18 08:09:32.330079] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:35:39.382 [2024-11-18 08:09:32.330153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892386 ] 00:35:39.382 [2024-11-18 08:09:32.400041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.382 [2024-11-18 08:09:32.446645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.641 Running I/O for 15 seconds... 00:35:41.587 8510.00 IOPS, 33.24 MiB/s [2024-11-18T07:09:35.615Z] 8670.50 IOPS, 33.87 MiB/s [2024-11-18T07:09:35.615Z] 08:09:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 892103 00:35:42.527 08:09:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:42.527 [2024-11-18 08:09:35.294938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.294998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:42.527 [2024-11-18 08:09:35.295308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 
[2024-11-18 08:09:35.295903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.295979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.295991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.296005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.296020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.296034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.296046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.296060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.296072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.296085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.527 [2024-11-18 08:09:35.296103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.527 [2024-11-18 08:09:35.296117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.528 [2024-11-18 08:09:35.296283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 
[2024-11-18 08:09:35.296353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 
[2024-11-18 08:09:35.296920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.296984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.296998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.528 [2024-11-18 08:09:35.297263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.528 [2024-11-18 08:09:35.297276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.528 [2024-11-18 08:09:35.297289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 
[2024-11-18 08:09:35.297381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51432 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 
08:09:35.297932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.297983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.297996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298084] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.529 [2024-11-18 08:09:35.298416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.529 [2024-11-18 08:09:35.298429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.530 [2024-11-18 08:09:35.298459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.530 [2024-11-18 08:09:35.298520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.530 [2024-11-18 08:09:35.298550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.530 [2024-11-18 08:09:35.298578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.530 [2024-11-18 08:09:35.298607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.530 [2024-11-18 08:09:35.298636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.530 [2024-11-18 08:09:35.298669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.530 [2024-11-18 08:09:35.298698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.530 [2024-11-18 08:09:35.298727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.530 [2024-11-18 08:09:35.298742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.530 
[2024-11-18 08:09:35.298756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:42.530 [2024-11-18 08:09:35.298771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:42.530 [2024-11-18 08:09:35.298804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:42.530 [2024-11-18 08:09:35.298821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:42.530 [2024-11-18 08:09:35.298834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:42.530 [2024-11-18 08:09:35.298848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:42.530 [2024-11-18 08:09:35.298875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:42.530 [2024-11-18 08:09:35.298888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:42.530 [2024-11-18 08:09:35.298900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:42.530 [2024-11-18 08:09:35.298913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b1c20 is same with the state(6) to be set
00:35:42.530 [2024-11-18 08:09:35.298928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:42.530 [2024-11-18 08:09:35.298938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.530
[2024-11-18 08:09:35.298948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51296 len:8 PRP1 0x0 PRP2 0x0
00:35:42.530 [2024-11-18 08:09:35.298959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:42.530 [2024-11-18 08:09:35.302190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.530 [2024-11-18 08:09:35.302265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.530 [2024-11-18 08:09:35.302886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.530 [2024-11-18 08:09:35.302925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.530 [2024-11-18 08:09:35.302956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.530 [2024-11-18 08:09:35.303211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.530 [2024-11-18 08:09:35.303427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.530 [2024-11-18 08:09:35.303447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.530 [2024-11-18 08:09:35.303462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.530 [2024-11-18 08:09:35.303511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.530 [2024-11-18 08:09:35.315850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.530 [2024-11-18 08:09:35.316281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.530 [2024-11-18 08:09:35.316334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.530 [2024-11-18 08:09:35.316350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.530 [2024-11-18 08:09:35.316613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.530 [2024-11-18 08:09:35.316852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.530 [2024-11-18 08:09:35.316872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.530 [2024-11-18 08:09:35.316885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.530 [2024-11-18 08:09:35.316898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.530 [2024-11-18 08:09:35.329034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.530 [2024-11-18 08:09:35.329513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.530 [2024-11-18 08:09:35.329542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.530 [2024-11-18 08:09:35.329559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.530 [2024-11-18 08:09:35.329773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.530 [2024-11-18 08:09:35.329991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.530 [2024-11-18 08:09:35.330010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.530 [2024-11-18 08:09:35.330022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.530 [2024-11-18 08:09:35.330034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.530 [2024-11-18 08:09:35.342267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.530 [2024-11-18 08:09:35.342682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.530 [2024-11-18 08:09:35.342711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.530 [2024-11-18 08:09:35.342728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.530 [2024-11-18 08:09:35.342967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.530 [2024-11-18 08:09:35.343160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.530 [2024-11-18 08:09:35.343178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.530 [2024-11-18 08:09:35.343196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.530 [2024-11-18 08:09:35.343208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.530 [2024-11-18 08:09:35.355391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.530 [2024-11-18 08:09:35.355907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.530 [2024-11-18 08:09:35.355949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.530 [2024-11-18 08:09:35.355966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.530 [2024-11-18 08:09:35.356216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.530 [2024-11-18 08:09:35.356408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.530 [2024-11-18 08:09:35.356426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.530 [2024-11-18 08:09:35.356439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.530 [2024-11-18 08:09:35.356450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.530 [2024-11-18 08:09:35.368666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.530 [2024-11-18 08:09:35.369133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.530 [2024-11-18 08:09:35.369195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.530 [2024-11-18 08:09:35.369211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.530 [2024-11-18 08:09:35.369453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.530 [2024-11-18 08:09:35.369675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.530 [2024-11-18 08:09:35.369695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.369707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.369719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.381848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.531 [2024-11-18 08:09:35.382254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.531 [2024-11-18 08:09:35.382319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.531 [2024-11-18 08:09:35.382335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.531 [2024-11-18 08:09:35.382580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.531 [2024-11-18 08:09:35.382778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.531 [2024-11-18 08:09:35.382812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.382824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.382836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.394962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.531 [2024-11-18 08:09:35.395291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.531 [2024-11-18 08:09:35.395318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.531 [2024-11-18 08:09:35.395334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.531 [2024-11-18 08:09:35.395570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.531 [2024-11-18 08:09:35.395791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.531 [2024-11-18 08:09:35.395826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.395839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.395851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.408031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.531 [2024-11-18 08:09:35.408460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.531 [2024-11-18 08:09:35.408487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.531 [2024-11-18 08:09:35.408528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.531 [2024-11-18 08:09:35.408769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.531 [2024-11-18 08:09:35.408995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.531 [2024-11-18 08:09:35.409014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.409026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.409038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.421131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.531 [2024-11-18 08:09:35.421516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.531 [2024-11-18 08:09:35.421543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.531 [2024-11-18 08:09:35.421559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.531 [2024-11-18 08:09:35.421759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.531 [2024-11-18 08:09:35.421985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.531 [2024-11-18 08:09:35.422003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.422015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.422027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.434409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.531 [2024-11-18 08:09:35.434826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.531 [2024-11-18 08:09:35.434873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.531 [2024-11-18 08:09:35.434890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.531 [2024-11-18 08:09:35.435147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.531 [2024-11-18 08:09:35.435346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.531 [2024-11-18 08:09:35.435365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.435377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.435389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.447484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.531 [2024-11-18 08:09:35.447901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.531 [2024-11-18 08:09:35.447942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.531 [2024-11-18 08:09:35.447959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.531 [2024-11-18 08:09:35.448199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.531 [2024-11-18 08:09:35.448407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.531 [2024-11-18 08:09:35.448426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.448438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.448449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.460569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.531 [2024-11-18 08:09:35.460993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.531 [2024-11-18 08:09:35.461035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.531 [2024-11-18 08:09:35.461052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.531 [2024-11-18 08:09:35.461291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.531 [2024-11-18 08:09:35.461524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.531 [2024-11-18 08:09:35.461544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.461557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.461583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.473662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.531 [2024-11-18 08:09:35.474058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.531 [2024-11-18 08:09:35.474085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.531 [2024-11-18 08:09:35.474101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.531 [2024-11-18 08:09:35.474328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.531 [2024-11-18 08:09:35.474563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.531 [2024-11-18 08:09:35.474598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.531 [2024-11-18 08:09:35.474611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.531 [2024-11-18 08:09:35.474624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.531 [2024-11-18 08:09:35.486720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.532 [2024-11-18 08:09:35.487088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.532 [2024-11-18 08:09:35.487129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.532 [2024-11-18 08:09:35.487144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.532 [2024-11-18 08:09:35.487391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.532 [2024-11-18 08:09:35.487629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.532 [2024-11-18 08:09:35.487650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.532 [2024-11-18 08:09:35.487663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.532 [2024-11-18 08:09:35.487675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.532 [2024-11-18 08:09:35.499765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.500140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.500183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.500199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.500452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.500697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.500718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.500731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.500743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.532 [2024-11-18 08:09:35.512730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.513095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.513138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.513154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.513406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.513641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.513662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.513679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.513692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.532 [2024-11-18 08:09:35.525707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.526078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.526121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.526137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.526408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.526653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.526674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.526687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.526699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.532 [2024-11-18 08:09:35.538839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.539231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.539258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.539274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.539504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.539719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.539738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.539750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.539762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.532 [2024-11-18 08:09:35.551853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.552299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.552326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.552343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.552597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.552840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.552860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.552887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.552899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.532 [2024-11-18 08:09:35.565600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.565985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.566013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.566028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.566261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.566484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.566512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.566525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.566552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.532 [2024-11-18 08:09:35.578937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.579344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.579384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.579400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.579640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.579881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.579900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.579912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.579924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.532 [2024-11-18 08:09:35.591977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.592403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.592445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.592461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.592699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.592944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.592962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.592975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.592987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.532 [2024-11-18 08:09:35.605072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.532 [2024-11-18 08:09:35.605444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.532 [2024-11-18 08:09:35.605477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.532 [2024-11-18 08:09:35.605517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.532 [2024-11-18 08:09:35.605761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.532 [2024-11-18 08:09:35.605987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.532 [2024-11-18 08:09:35.606005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.532 [2024-11-18 08:09:35.606017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.532 [2024-11-18 08:09:35.606029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.618123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.618488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.618522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.618538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.618780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.618989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.619008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.619020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.619032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.631304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.631690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.631718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.631734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.631983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.632190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.632209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.632221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.632233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.644482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.644870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.644898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.644914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.645141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.645363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.645382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.645395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.645408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.657795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.658186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.658214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.658230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.658452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.658702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.658724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.658738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.658751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.670980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.671347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.671390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.671406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.671645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.671884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.671903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.671915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.671926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 7572.33 IOPS, 29.58 MiB/s [2024-11-18T07:09:35.881Z] [2024-11-18 08:09:35.684030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.684397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.684425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.684441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.684695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.684942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.684966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.684979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.684990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.697079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.697441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.697482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.697506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.697761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.697971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.697990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.698002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.698013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.710143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.710565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.710606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.710623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.710857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.711049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.711067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.711080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.711091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.723119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.723609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.793 [2024-11-18 08:09:35.723652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.793 [2024-11-18 08:09:35.723669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.793 [2024-11-18 08:09:35.723919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.793 [2024-11-18 08:09:35.724126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.793 [2024-11-18 08:09:35.724144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.793 [2024-11-18 08:09:35.724156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.793 [2024-11-18 08:09:35.724168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.793 [2024-11-18 08:09:35.736299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.793 [2024-11-18 08:09:35.736733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.794 [2024-11-18 08:09:35.736776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.794 [2024-11-18 08:09:35.736793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.794 [2024-11-18 08:09:35.737032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.794 [2024-11-18 08:09:35.737240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.794 [2024-11-18 08:09:35.737258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.794 [2024-11-18 08:09:35.737270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.794 [2024-11-18 08:09:35.737281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.794 [2024-11-18 08:09:35.749408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.794 [2024-11-18 08:09:35.749878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.794 [2024-11-18 08:09:35.749920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.794 [2024-11-18 08:09:35.749936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.794 [2024-11-18 08:09:35.750204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.794 [2024-11-18 08:09:35.750397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.794 [2024-11-18 08:09:35.750415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.794 [2024-11-18 08:09:35.750427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.794 [2024-11-18 08:09:35.750439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.794 [2024-11-18 08:09:35.762478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.794 [2024-11-18 08:09:35.762907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.794 [2024-11-18 08:09:35.762948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.794 [2024-11-18 08:09:35.762965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.794 [2024-11-18 08:09:35.763205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.794 [2024-11-18 08:09:35.763412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.794 [2024-11-18 08:09:35.763431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.794 [2024-11-18 08:09:35.763443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.794 [2024-11-18 08:09:35.763454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.794 [2024-11-18 08:09:35.775561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:42.794 [2024-11-18 08:09:35.775990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.794 [2024-11-18 08:09:35.776025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:42.794 [2024-11-18 08:09:35.776057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:42.794 [2024-11-18 08:09:35.776295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:42.794 [2024-11-18 08:09:35.776529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:42.794 [2024-11-18 08:09:35.776549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:42.794 [2024-11-18 08:09:35.776562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:42.794 [2024-11-18 08:09:35.776589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:42.794 [2024-11-18 08:09:35.788685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.794 [2024-11-18 08:09:35.789066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.794 [2024-11-18 08:09:35.789107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.794 [2024-11-18 08:09:35.789124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.794 [2024-11-18 08:09:35.789345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.794 [2024-11-18 08:09:35.789600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.794 [2024-11-18 08:09:35.789621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.794 [2024-11-18 08:09:35.789634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.794 [2024-11-18 08:09:35.789647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.794 [2024-11-18 08:09:35.801926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.794 [2024-11-18 08:09:35.802264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.794 [2024-11-18 08:09:35.802292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.794 [2024-11-18 08:09:35.802308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.794 [2024-11-18 08:09:35.802541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.794 [2024-11-18 08:09:35.802774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.794 [2024-11-18 08:09:35.802795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.794 [2024-11-18 08:09:35.802809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.794 [2024-11-18 08:09:35.802836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.794 [2024-11-18 08:09:35.815094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.794 [2024-11-18 08:09:35.815519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.794 [2024-11-18 08:09:35.815548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.794 [2024-11-18 08:09:35.815565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.794 [2024-11-18 08:09:35.815809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.794 [2024-11-18 08:09:35.816001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.794 [2024-11-18 08:09:35.816020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.794 [2024-11-18 08:09:35.816032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.794 [2024-11-18 08:09:35.816043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.794 [2024-11-18 08:09:35.828330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.794 [2024-11-18 08:09:35.828698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.794 [2024-11-18 08:09:35.828726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.794 [2024-11-18 08:09:35.828742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.794 [2024-11-18 08:09:35.828980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.794 [2024-11-18 08:09:35.829187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.794 [2024-11-18 08:09:35.829205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.794 [2024-11-18 08:09:35.829218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.794 [2024-11-18 08:09:35.829229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.794 [2024-11-18 08:09:35.841609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.794 [2024-11-18 08:09:35.841994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.794 [2024-11-18 08:09:35.842036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.794 [2024-11-18 08:09:35.842052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.794 [2024-11-18 08:09:35.842299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.794 [2024-11-18 08:09:35.842533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.794 [2024-11-18 08:09:35.842568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.794 [2024-11-18 08:09:35.842581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.794 [2024-11-18 08:09:35.842594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.794 [2024-11-18 08:09:35.854897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.794 [2024-11-18 08:09:35.855215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.794 [2024-11-18 08:09:35.855256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.794 [2024-11-18 08:09:35.855272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.794 [2024-11-18 08:09:35.855487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.794 [2024-11-18 08:09:35.855696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.794 [2024-11-18 08:09:35.855716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.794 [2024-11-18 08:09:35.855734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.794 [2024-11-18 08:09:35.855747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:42.795 [2024-11-18 08:09:35.868141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:42.795 [2024-11-18 08:09:35.868571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.795 [2024-11-18 08:09:35.868601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:42.795 [2024-11-18 08:09:35.868618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:42.795 [2024-11-18 08:09:35.868869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:42.795 [2024-11-18 08:09:35.869077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:42.795 [2024-11-18 08:09:35.869096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:42.795 [2024-11-18 08:09:35.869108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:42.795 [2024-11-18 08:09:35.869120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.057 [2024-11-18 08:09:35.881652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.057 [2024-11-18 08:09:35.882084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.057 [2024-11-18 08:09:35.882135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.057 [2024-11-18 08:09:35.882151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.057 [2024-11-18 08:09:35.882399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.057 [2024-11-18 08:09:35.882638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.057 [2024-11-18 08:09:35.882659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.057 [2024-11-18 08:09:35.882672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.057 [2024-11-18 08:09:35.882684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.057 [2024-11-18 08:09:35.894995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.057 [2024-11-18 08:09:35.895385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.057 [2024-11-18 08:09:35.895427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.057 [2024-11-18 08:09:35.895444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.057 [2024-11-18 08:09:35.895683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.057 [2024-11-18 08:09:35.895920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.057 [2024-11-18 08:09:35.895939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.057 [2024-11-18 08:09:35.895952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.057 [2024-11-18 08:09:35.895964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.057 [2024-11-18 08:09:35.908211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.057 [2024-11-18 08:09:35.908548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.057 [2024-11-18 08:09:35.908575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.057 [2024-11-18 08:09:35.908591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.057 [2024-11-18 08:09:35.908813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.057 [2024-11-18 08:09:35.909026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.057 [2024-11-18 08:09:35.909045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.057 [2024-11-18 08:09:35.909058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.057 [2024-11-18 08:09:35.909069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.057 [2024-11-18 08:09:35.921547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.057 [2024-11-18 08:09:35.921960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.057 [2024-11-18 08:09:35.922002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.057 [2024-11-18 08:09:35.922018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.057 [2024-11-18 08:09:35.922290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.057 [2024-11-18 08:09:35.922515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.057 [2024-11-18 08:09:35.922536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.057 [2024-11-18 08:09:35.922548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.057 [2024-11-18 08:09:35.922560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.057 [2024-11-18 08:09:35.934691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.057 [2024-11-18 08:09:35.935010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.057 [2024-11-18 08:09:35.935051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.057 [2024-11-18 08:09:35.935067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.057 [2024-11-18 08:09:35.935283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.057 [2024-11-18 08:09:35.935520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.057 [2024-11-18 08:09:35.935551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.057 [2024-11-18 08:09:35.935564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.057 [2024-11-18 08:09:35.935576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.057 [2024-11-18 08:09:35.947986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.057 [2024-11-18 08:09:35.948353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.057 [2024-11-18 08:09:35.948400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.057 [2024-11-18 08:09:35.948418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.057 [2024-11-18 08:09:35.948685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.057 [2024-11-18 08:09:35.948916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.057 [2024-11-18 08:09:35.948935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.057 [2024-11-18 08:09:35.948948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.057 [2024-11-18 08:09:35.948959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.057 [2024-11-18 08:09:35.961018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.057 [2024-11-18 08:09:35.961383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.057 [2024-11-18 08:09:35.961425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.057 [2024-11-18 08:09:35.961441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.057 [2024-11-18 08:09:35.961717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.057 [2024-11-18 08:09:35.961949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.057 [2024-11-18 08:09:35.961968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.057 [2024-11-18 08:09:35.961980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.057 [2024-11-18 08:09:35.961992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:35.974322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:35.974696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:35.974725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:35.974741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:35.974983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:35.975182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:35.975201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:35.975213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:35.975225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:35.987518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:35.987923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:35.987951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:35.987968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:35.988215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:35.988432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:35.988451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:35.988463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:35.988475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:36.000744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:36.001060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:36.001086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:36.001102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:36.001316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:36.001534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:36.001561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:36.001573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:36.001584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:36.013860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:36.014251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:36.014278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:36.014294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:36.014528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:36.014748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:36.014767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:36.014779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:36.014792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:36.027039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:36.027409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:36.027436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:36.027453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:36.027690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:36.027931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:36.027950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:36.027970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:36.027983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:36.040162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:36.040568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:36.040596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:36.040613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:36.040841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:36.041056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:36.041075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:36.041087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:36.041099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:36.053281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:36.053662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:36.053691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:36.053707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:36.053949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:36.054153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:36.054173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:36.054200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:36.054212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:36.066389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:36.066790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:36.066819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:36.066835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:36.067076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:36.067270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:36.067288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:36.067300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:36.067312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:36.079590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:36.079975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:36.080016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:36.080032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:36.080278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:36.080510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:36.080530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.058 [2024-11-18 08:09:36.080558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.058 [2024-11-18 08:09:36.080571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.058 [2024-11-18 08:09:36.092648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.058 [2024-11-18 08:09:36.093029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.058 [2024-11-18 08:09:36.093070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.058 [2024-11-18 08:09:36.093087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.058 [2024-11-18 08:09:36.093308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.058 [2024-11-18 08:09:36.093543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.058 [2024-11-18 08:09:36.093563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.059 [2024-11-18 08:09:36.093591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.059 [2024-11-18 08:09:36.093604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.059 [2024-11-18 08:09:36.105744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.059 [2024-11-18 08:09:36.106238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.059 [2024-11-18 08:09:36.106280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.059 [2024-11-18 08:09:36.106297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.059 [2024-11-18 08:09:36.106572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.059 [2024-11-18 08:09:36.106799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.059 [2024-11-18 08:09:36.106819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.059 [2024-11-18 08:09:36.106832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.059 [2024-11-18 08:09:36.106844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.059 [2024-11-18 08:09:36.118768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.059 [2024-11-18 08:09:36.119142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.059 [2024-11-18 08:09:36.119175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.059 [2024-11-18 08:09:36.119192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.059 [2024-11-18 08:09:36.119426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.059 [2024-11-18 08:09:36.119666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.059 [2024-11-18 08:09:36.119687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.059 [2024-11-18 08:09:36.119700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.059 [2024-11-18 08:09:36.119712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.059 [2024-11-18 08:09:36.131819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.059 [2024-11-18 08:09:36.132187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.059 [2024-11-18 08:09:36.132229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.059 [2024-11-18 08:09:36.132245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.059 [2024-11-18 08:09:36.132522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.059 [2024-11-18 08:09:36.132727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.059 [2024-11-18 08:09:36.132748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.059 [2024-11-18 08:09:36.132760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.059 [2024-11-18 08:09:36.132772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.320 [2024-11-18 08:09:36.145103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.320 [2024-11-18 08:09:36.145517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.320 [2024-11-18 08:09:36.145558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.320 [2024-11-18 08:09:36.145575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.320 [2024-11-18 08:09:36.145817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.320 [2024-11-18 08:09:36.146025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.320 [2024-11-18 08:09:36.146043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.320 [2024-11-18 08:09:36.146055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.320 [2024-11-18 08:09:36.146067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.320 [2024-11-18 08:09:36.158206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.158598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.158626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.158642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.158870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.159079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.159097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.159110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.159121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.171184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.171550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.171593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.171609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.171859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.172067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.172085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.172097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.172109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.184270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.184637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.184680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.184695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.184937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.185129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.185148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.185160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.185171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.197312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.197683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.197726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.197742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.197994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.198201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.198219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.198236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.198249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.210392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.210798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.210840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.210855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.211083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.211291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.211310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.211322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.211333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.223418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.223834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.223875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.223891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.224125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.224333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.224351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.224363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.224374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.236541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.236944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.236986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.237002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.237253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.237461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.237479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.237515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.237531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.249680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.250171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.250212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.250229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.250479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.250716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.250736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.250748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.250760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.262774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.263137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.263163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.263179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.263414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.263650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.263671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.263683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.263695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.275780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.276146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.276190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.276206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.276457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.276703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.276725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.276738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.276750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.288876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.289236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.289268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.289285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.289528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.289748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.289768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.289780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.289807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.302005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.302370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.302397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.302413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.302663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.302900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.302919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.302931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.302943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.315099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.315476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.315511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.315528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.315741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.316012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.316032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.316046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.316058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.328517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.328904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.328946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.328962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.329214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.329412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.329431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.329443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.329455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.341744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.320 [2024-11-18 08:09:36.342122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.320 [2024-11-18 08:09:36.342150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.320 [2024-11-18 08:09:36.342167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.320 [2024-11-18 08:09:36.342408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.320 [2024-11-18 08:09:36.342648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.320 [2024-11-18 08:09:36.342668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.320 [2024-11-18 08:09:36.342681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.320 [2024-11-18 08:09:36.342693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.320 [2024-11-18 08:09:36.354751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.321 [2024-11-18 08:09:36.355161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.321 [2024-11-18 08:09:36.355201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.321 [2024-11-18 08:09:36.355218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.321 [2024-11-18 08:09:36.355450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.321 [2024-11-18 08:09:36.355689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.321 [2024-11-18 08:09:36.355710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.321 [2024-11-18 08:09:36.355723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.321 [2024-11-18 08:09:36.355735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.321 [2024-11-18 08:09:36.367982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.321 [2024-11-18 08:09:36.368410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.321 [2024-11-18 08:09:36.368452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.321 [2024-11-18 08:09:36.368469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.321 [2024-11-18 08:09:36.368719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.321 [2024-11-18 08:09:36.368932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.321 [2024-11-18 08:09:36.368951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.321 [2024-11-18 08:09:36.368968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.321 [2024-11-18 08:09:36.368980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.321 [2024-11-18 08:09:36.381253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.321 [2024-11-18 08:09:36.381662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.321 [2024-11-18 08:09:36.381705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.321 [2024-11-18 08:09:36.381721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.321 [2024-11-18 08:09:36.381986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.321 [2024-11-18 08:09:36.382178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.321 [2024-11-18 08:09:36.382196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.321 [2024-11-18 08:09:36.382208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.321 [2024-11-18 08:09:36.382220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.321 [2024-11-18 08:09:36.394546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.321 [2024-11-18 08:09:36.394996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.321 [2024-11-18 08:09:36.395048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.321 [2024-11-18 08:09:36.395064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.321 [2024-11-18 08:09:36.395331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.321 [2024-11-18 08:09:36.395557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.321 [2024-11-18 08:09:36.395597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.321 [2024-11-18 08:09:36.395610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.321 [2024-11-18 08:09:36.395623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.582 [2024-11-18 08:09:36.408107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.582 [2024-11-18 08:09:36.408601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.582 [2024-11-18 08:09:36.408630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.582 [2024-11-18 08:09:36.408646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.582 [2024-11-18 08:09:36.408888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.582 [2024-11-18 08:09:36.409081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.582 [2024-11-18 08:09:36.409099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.582 [2024-11-18 08:09:36.409111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.582 [2024-11-18 08:09:36.409123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.582 [2024-11-18 08:09:36.421332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.582 [2024-11-18 08:09:36.421776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.582 [2024-11-18 08:09:36.421839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.582 [2024-11-18 08:09:36.421855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.582 [2024-11-18 08:09:36.422089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.582 [2024-11-18 08:09:36.422296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.582 [2024-11-18 08:09:36.422315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.582 [2024-11-18 08:09:36.422327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.582 [2024-11-18 08:09:36.422338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.582 [2024-11-18 08:09:36.434430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.582 [2024-11-18 08:09:36.434807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.582 [2024-11-18 08:09:36.434869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.582 [2024-11-18 08:09:36.434885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.582 [2024-11-18 08:09:36.435123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.582 [2024-11-18 08:09:36.435337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.582 [2024-11-18 08:09:36.435356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.582 [2024-11-18 08:09:36.435368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.582 [2024-11-18 08:09:36.435380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.582 [2024-11-18 08:09:36.447599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.582 [2024-11-18 08:09:36.448036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.582 [2024-11-18 08:09:36.448065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.582 [2024-11-18 08:09:36.448081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.582 [2024-11-18 08:09:36.448323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.582 [2024-11-18 08:09:36.448550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.582 [2024-11-18 08:09:36.448570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.582 [2024-11-18 08:09:36.448584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.582 [2024-11-18 08:09:36.448596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.582 [2024-11-18 08:09:36.460810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.582 [2024-11-18 08:09:36.461193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.582 [2024-11-18 08:09:36.461224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.582 [2024-11-18 08:09:36.461241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.582 [2024-11-18 08:09:36.461462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.582 [2024-11-18 08:09:36.461715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.582 [2024-11-18 08:09:36.461738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.582 [2024-11-18 08:09:36.461751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.582 [2024-11-18 08:09:36.461764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.582 [2024-11-18 08:09:36.474038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.582 [2024-11-18 08:09:36.474463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.582 [2024-11-18 08:09:36.474511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.582 [2024-11-18 08:09:36.474530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.582 [2024-11-18 08:09:36.474769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.582 [2024-11-18 08:09:36.474977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.582 [2024-11-18 08:09:36.474996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.582 [2024-11-18 08:09:36.475008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.582 [2024-11-18 08:09:36.475020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.582 [2024-11-18 08:09:36.487198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.582 [2024-11-18 08:09:36.487585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.582 [2024-11-18 08:09:36.487613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.582 [2024-11-18 08:09:36.487628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.582 [2024-11-18 08:09:36.487852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.582 [2024-11-18 08:09:36.488061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.582 [2024-11-18 08:09:36.488080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.582 [2024-11-18 08:09:36.488092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.582 [2024-11-18 08:09:36.488104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.582 [2024-11-18 08:09:36.500388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.582 [2024-11-18 08:09:36.500780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.582 [2024-11-18 08:09:36.500807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.582 [2024-11-18 08:09:36.500822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.582 [2024-11-18 08:09:36.501055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.582 [2024-11-18 08:09:36.501249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.582 [2024-11-18 08:09:36.501268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.582 [2024-11-18 08:09:36.501280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.582 [2024-11-18 08:09:36.501291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.582 [2024-11-18 08:09:36.513513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.582 [2024-11-18 08:09:36.514004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.582 [2024-11-18 08:09:36.514030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.582 [2024-11-18 08:09:36.514062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.582 [2024-11-18 08:09:36.514312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.514546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.514569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.514582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.514594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.526641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.526994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.527021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.527037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.527252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.527460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.527479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.527515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.527540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.539726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.540116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.540142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.540157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.540371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.540624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.540645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.540663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.540676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.552781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.553207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.553248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.553266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.553514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.553712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.553732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.553744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.553756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.565861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.566230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.566257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.566272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.566529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.566756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.566776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.566790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.566818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.579279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.579692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.579721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.579738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.579979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.580172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.580191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.580203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.580215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.592430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.592867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.592909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.592926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.593166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.593359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.593377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.593389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.593400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.605534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.605896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.605923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.605938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.606172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.606381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.606400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.606412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.606423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.618588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.618886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.618947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.618990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.619204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.619411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.619430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.619442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.619453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.631670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.632008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.632078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.632095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.583 [2024-11-18 08:09:36.632321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.583 [2024-11-18 08:09:36.632538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.583 [2024-11-18 08:09:36.632558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.583 [2024-11-18 08:09:36.632570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.583 [2024-11-18 08:09:36.632582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.583 [2024-11-18 08:09:36.644737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.583 [2024-11-18 08:09:36.645247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.583 [2024-11-18 08:09:36.645297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.583 [2024-11-18 08:09:36.645313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.584 [2024-11-18 08:09:36.645567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.584 [2024-11-18 08:09:36.645771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.584 [2024-11-18 08:09:36.645790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.584 [2024-11-18 08:09:36.645820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.584 [2024-11-18 08:09:36.645832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.584 [2024-11-18 08:09:36.658322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.584 [2024-11-18 08:09:36.658676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.584 [2024-11-18 08:09:36.658705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.584 [2024-11-18 08:09:36.658722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.584 [2024-11-18 08:09:36.658962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.584 [2024-11-18 08:09:36.659155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.584 [2024-11-18 08:09:36.659173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.584 [2024-11-18 08:09:36.659185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.584 [2024-11-18 08:09:36.659196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.846 [2024-11-18 08:09:36.671821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.846 [2024-11-18 08:09:36.672240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.846 [2024-11-18 08:09:36.672276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.846 [2024-11-18 08:09:36.672308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.672579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.672797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.672819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.672845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.672857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 5679.25 IOPS, 22.18 MiB/s [2024-11-18T07:09:36.935Z] [2024-11-18 08:09:36.685938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.847 [2024-11-18 08:09:36.686354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-11-18 08:09:36.686407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.847 [2024-11-18 08:09:36.686423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.686675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.686908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.686927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.686938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.686950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 [2024-11-18 08:09:36.699179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.847 [2024-11-18 08:09:36.699521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-11-18 08:09:36.699549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.847 [2024-11-18 08:09:36.699566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.699779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.700008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.700027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.700039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.700050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 [2024-11-18 08:09:36.712380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.847 [2024-11-18 08:09:36.712773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-11-18 08:09:36.712815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.847 [2024-11-18 08:09:36.712832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.713083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.713290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.713314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.713326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.713338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 [2024-11-18 08:09:36.725382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.847 [2024-11-18 08:09:36.725806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-11-18 08:09:36.725834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.847 [2024-11-18 08:09:36.725866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.726102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.726311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.726330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.726342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.726353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 [2024-11-18 08:09:36.738408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.847 [2024-11-18 08:09:36.738810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-11-18 08:09:36.738857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.847 [2024-11-18 08:09:36.738873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.739141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.739335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.739353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.739365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.739376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 [2024-11-18 08:09:36.751581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.847 [2024-11-18 08:09:36.752018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-11-18 08:09:36.752046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.847 [2024-11-18 08:09:36.752077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.752315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.752550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.752570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.752583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.752595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 [2024-11-18 08:09:36.764607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.847 [2024-11-18 08:09:36.765100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-11-18 08:09:36.765142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.847 [2024-11-18 08:09:36.765159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.765408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.765643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.765662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.765675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.765687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 [2024-11-18 08:09:36.777728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:43.847 [2024-11-18 08:09:36.778133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-11-18 08:09:36.778175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:43.847 [2024-11-18 08:09:36.778191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:43.847 [2024-11-18 08:09:36.778444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:43.847 [2024-11-18 08:09:36.778684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:43.847 [2024-11-18 08:09:36.778704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:43.847 [2024-11-18 08:09:36.778717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:43.847 [2024-11-18 08:09:36.778729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:43.847 [2024-11-18 08:09:36.790735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.847 [2024-11-18 08:09:36.791163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-11-18 08:09:36.791205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.847 [2024-11-18 08:09:36.791222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.847 [2024-11-18 08:09:36.791460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.847 [2024-11-18 08:09:36.791705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.847 [2024-11-18 08:09:36.791727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.847 [2024-11-18 08:09:36.791740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.847 [2024-11-18 08:09:36.791752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.847 [2024-11-18 08:09:36.803731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.804092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.804139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.804156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.804390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.804597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.804617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.804630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.804641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.817059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.817589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.817618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.817635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.817849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.818101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.818121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.818134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.818146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.830158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.830543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.830571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.830587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.830806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.831016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.831034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.831046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.831058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.843253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.843687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.843716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.843732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.843977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.844171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.844189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.844201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.844212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.856585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.856990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.857018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.857034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.857274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.857497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.857533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.857546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.857558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.870021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.870361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.870389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.870406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.870630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.870884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.870904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.870917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.870929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.883333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.883705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.883734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.883751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.883980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.884195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.884218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.884231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.884243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.896741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.897087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.897115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.897132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.897360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.897628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.897650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.897664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.897677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.909881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.910256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.910284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.910300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.910537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.910748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.910768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.848 [2024-11-18 08:09:36.910796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.848 [2024-11-18 08:09:36.910809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:43.848 [2024-11-18 08:09:36.923152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.848 [2024-11-18 08:09:36.923523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.848 [2024-11-18 08:09:36.923552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:43.848 [2024-11-18 08:09:36.923569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:43.848 [2024-11-18 08:09:36.923798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:43.848 [2024-11-18 08:09:36.924014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:43.848 [2024-11-18 08:09:36.924033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:43.849 [2024-11-18 08:09:36.924046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:43.849 [2024-11-18 08:09:36.924057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.111 [2024-11-18 08:09:36.936464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.111 [2024-11-18 08:09:36.936908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.111 [2024-11-18 08:09:36.936936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.111 [2024-11-18 08:09:36.936952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.111 [2024-11-18 08:09:36.937180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.111 [2024-11-18 08:09:36.937395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.111 [2024-11-18 08:09:36.937414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.111 [2024-11-18 08:09:36.937426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.111 [2024-11-18 08:09:36.937438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.111 [2024-11-18 08:09:36.949774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.111 [2024-11-18 08:09:36.950223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.111 [2024-11-18 08:09:36.950251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.111 [2024-11-18 08:09:36.950267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.111 [2024-11-18 08:09:36.950517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.111 [2024-11-18 08:09:36.950743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.111 [2024-11-18 08:09:36.950763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.111 [2024-11-18 08:09:36.950777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.111 [2024-11-18 08:09:36.950791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.111 [2024-11-18 08:09:36.962970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.111 [2024-11-18 08:09:36.963384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.111 [2024-11-18 08:09:36.963425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.111 [2024-11-18 08:09:36.963442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.111 [2024-11-18 08:09:36.963680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.111 [2024-11-18 08:09:36.963916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.111 [2024-11-18 08:09:36.963935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.111 [2024-11-18 08:09:36.963948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.111 [2024-11-18 08:09:36.963959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.111 [2024-11-18 08:09:36.976254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.111 [2024-11-18 08:09:36.976625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.111 [2024-11-18 08:09:36.976658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.111 [2024-11-18 08:09:36.976675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.111 [2024-11-18 08:09:36.976929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.111 [2024-11-18 08:09:36.977128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.111 [2024-11-18 08:09:36.977147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.111 [2024-11-18 08:09:36.977160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.111 [2024-11-18 08:09:36.977171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.111 [2024-11-18 08:09:36.989517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.111 [2024-11-18 08:09:36.989877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.111 [2024-11-18 08:09:36.989904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.112 [2024-11-18 08:09:36.989920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.112 [2024-11-18 08:09:36.990141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.112 [2024-11-18 08:09:36.990356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.112 [2024-11-18 08:09:36.990376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.112 [2024-11-18 08:09:36.990388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.112 [2024-11-18 08:09:36.990400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.112 [2024-11-18 08:09:37.002807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.112 [2024-11-18 08:09:37.003179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.112 [2024-11-18 08:09:37.003207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.112 [2024-11-18 08:09:37.003223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.112 [2024-11-18 08:09:37.003465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.112 [2024-11-18 08:09:37.003700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.112 [2024-11-18 08:09:37.003722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.112 [2024-11-18 08:09:37.003736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.112 [2024-11-18 08:09:37.003748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.112 [2024-11-18 08:09:37.016002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.112 [2024-11-18 08:09:37.016443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.112 [2024-11-18 08:09:37.016472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.112 [2024-11-18 08:09:37.016488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.112 [2024-11-18 08:09:37.016745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.112 [2024-11-18 08:09:37.016954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.112 [2024-11-18 08:09:37.016973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.112 [2024-11-18 08:09:37.016986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.112 [2024-11-18 08:09:37.016999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.112 [2024-11-18 08:09:37.029324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.112 [2024-11-18 08:09:37.029703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.112 [2024-11-18 08:09:37.029732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.112 [2024-11-18 08:09:37.029749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.112 [2024-11-18 08:09:37.029997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.112 [2024-11-18 08:09:37.030206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.112 [2024-11-18 08:09:37.030225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.112 [2024-11-18 08:09:37.030238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.112 [2024-11-18 08:09:37.030250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.112 [2024-11-18 08:09:37.042682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.112 [2024-11-18 08:09:37.043055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.112 [2024-11-18 08:09:37.043083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.112 [2024-11-18 08:09:37.043099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.112 [2024-11-18 08:09:37.043340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.112 [2024-11-18 08:09:37.043567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.112 [2024-11-18 08:09:37.043589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.112 [2024-11-18 08:09:37.043603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.112 [2024-11-18 08:09:37.043615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.112 [2024-11-18 08:09:37.056077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.112 [2024-11-18 08:09:37.056380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.112 [2024-11-18 08:09:37.056408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.112 [2024-11-18 08:09:37.056424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.112 [2024-11-18 08:09:37.056690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.112 [2024-11-18 08:09:37.056929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.112 [2024-11-18 08:09:37.056960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.112 [2024-11-18 08:09:37.056975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.112 [2024-11-18 08:09:37.056988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.112 [2024-11-18 08:09:37.069513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.112 [2024-11-18 08:09:37.069909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.112 [2024-11-18 08:09:37.069939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.112 [2024-11-18 08:09:37.069956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.112 [2024-11-18 08:09:37.070196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.112 [2024-11-18 08:09:37.070426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.112 [2024-11-18 08:09:37.070448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.112 [2024-11-18 08:09:37.070463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.112 [2024-11-18 08:09:37.070502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.112 [2024-11-18 08:09:37.082864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.112 [2024-11-18 08:09:37.083278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.112 [2024-11-18 08:09:37.083307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.112 [2024-11-18 08:09:37.083324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.112 [2024-11-18 08:09:37.083564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.112 [2024-11-18 08:09:37.083784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.112 [2024-11-18 08:09:37.083805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.112 [2024-11-18 08:09:37.083817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.112 [2024-11-18 08:09:37.083844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.112 [2024-11-18 08:09:37.096075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.112 [2024-11-18 08:09:37.096453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.112 [2024-11-18 08:09:37.096481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.112 [2024-11-18 08:09:37.096521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.112 [2024-11-18 08:09:37.096758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.112 [2024-11-18 08:09:37.096986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.112 [2024-11-18 08:09:37.097006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.112 [2024-11-18 08:09:37.097019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.112 [2024-11-18 08:09:37.097032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.112 [2024-11-18 08:09:37.109306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.112 [2024-11-18 08:09:37.109666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.112 [2024-11-18 08:09:37.109695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.112 [2024-11-18 08:09:37.109712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.112 [2024-11-18 08:09:37.109955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.112 [2024-11-18 08:09:37.110149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.112 [2024-11-18 08:09:37.110169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.112 [2024-11-18 08:09:37.110181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.112 [2024-11-18 08:09:37.110194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.112 [2024-11-18 08:09:37.122666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.112 [2024-11-18 08:09:37.123116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.113 [2024-11-18 08:09:37.123145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.113 [2024-11-18 08:09:37.123162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.113 [2024-11-18 08:09:37.123402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.113 [2024-11-18 08:09:37.123663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.113 [2024-11-18 08:09:37.123686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.113 [2024-11-18 08:09:37.123701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.113 [2024-11-18 08:09:37.123713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.113 [2024-11-18 08:09:37.136147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.113 [2024-11-18 08:09:37.136509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.113 [2024-11-18 08:09:37.136538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.113 [2024-11-18 08:09:37.136555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.113 [2024-11-18 08:09:37.136769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.113 [2024-11-18 08:09:37.136984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.113 [2024-11-18 08:09:37.137005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.113 [2024-11-18 08:09:37.137018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.113 [2024-11-18 08:09:37.137032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.113 [2024-11-18 08:09:37.149460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.113 [2024-11-18 08:09:37.149827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.113 [2024-11-18 08:09:37.149862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.113 [2024-11-18 08:09:37.149880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.113 [2024-11-18 08:09:37.150110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.113 [2024-11-18 08:09:37.150327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.113 [2024-11-18 08:09:37.150348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.113 [2024-11-18 08:09:37.150361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.113 [2024-11-18 08:09:37.150374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.113 [2024-11-18 08:09:37.162795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.113 [2024-11-18 08:09:37.163238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.113 [2024-11-18 08:09:37.163267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.113 [2024-11-18 08:09:37.163284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.113 [2024-11-18 08:09:37.163529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.113 [2024-11-18 08:09:37.163750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.113 [2024-11-18 08:09:37.163771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.113 [2024-11-18 08:09:37.163784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.113 [2024-11-18 08:09:37.163796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.113 [2024-11-18 08:09:37.176060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.113 [2024-11-18 08:09:37.176478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.113 [2024-11-18 08:09:37.176515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.113 [2024-11-18 08:09:37.176533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.113 [2024-11-18 08:09:37.176765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.113 [2024-11-18 08:09:37.176977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.113 [2024-11-18 08:09:37.176999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.113 [2024-11-18 08:09:37.177012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.113 [2024-11-18 08:09:37.177024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.113 [2024-11-18 08:09:37.189349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.113 [2024-11-18 08:09:37.189724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.113 [2024-11-18 08:09:37.189753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.113 [2024-11-18 08:09:37.189770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.113 [2024-11-18 08:09:37.190014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.113 [2024-11-18 08:09:37.190224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.113 [2024-11-18 08:09:37.190245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.113 [2024-11-18 08:09:37.190258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.113 [2024-11-18 08:09:37.190270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.375 [2024-11-18 08:09:37.202677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.375 [2024-11-18 08:09:37.203077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.375 [2024-11-18 08:09:37.203107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.375 [2024-11-18 08:09:37.203124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.375 [2024-11-18 08:09:37.203347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.375 [2024-11-18 08:09:37.203622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.375 [2024-11-18 08:09:37.203646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.375 [2024-11-18 08:09:37.203660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.375 [2024-11-18 08:09:37.203675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.375 [2024-11-18 08:09:37.215913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.375 [2024-11-18 08:09:37.216266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.375 [2024-11-18 08:09:37.216295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.375 [2024-11-18 08:09:37.216312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.375 [2024-11-18 08:09:37.216564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.375 [2024-11-18 08:09:37.216770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.375 [2024-11-18 08:09:37.216806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.375 [2024-11-18 08:09:37.216821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.375 [2024-11-18 08:09:37.216833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.375 [2024-11-18 08:09:37.229128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.375 [2024-11-18 08:09:37.229480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.375 [2024-11-18 08:09:37.229517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.229534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.229775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.229985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.230011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.230025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.230037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.242367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.242720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.242750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.242767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.242994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.243208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.243230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.243244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.243257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.255685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.256055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.256083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.256100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.256338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.256591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.256614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.256629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.256643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.268945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.269274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.269303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.269320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.269553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.269758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.269780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.269807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.269821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.282163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.282520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.282550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.282567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.282809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.283002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.283023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.283035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.283048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.295416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.295796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.295824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.295841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.296072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.296266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.296286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.296299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.296312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.308556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.308930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.308957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.308973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.309191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.309401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.309420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.309434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.309446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.321729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.322119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.322151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.322169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.322403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.322654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.322678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.322693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.322708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.335093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.335513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.335542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.335559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.335801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.336010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.336030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.336043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.336055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.348282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.348679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.376 [2024-11-18 08:09:37.348709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.376 [2024-11-18 08:09:37.348726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.376 [2024-11-18 08:09:37.348969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.376 [2024-11-18 08:09:37.349178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.376 [2024-11-18 08:09:37.349200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.376 [2024-11-18 08:09:37.349213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.376 [2024-11-18 08:09:37.349226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.376 [2024-11-18 08:09:37.361485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.376 [2024-11-18 08:09:37.361881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.377 [2024-11-18 08:09:37.361910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.377 [2024-11-18 08:09:37.361927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.377 [2024-11-18 08:09:37.362172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.377 [2024-11-18 08:09:37.362365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.377 [2024-11-18 08:09:37.362386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.377 [2024-11-18 08:09:37.362399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.377 [2024-11-18 08:09:37.362411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.377 [2024-11-18 08:09:37.374758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.377 [2024-11-18 08:09:37.375134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.377 [2024-11-18 08:09:37.375162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.377 [2024-11-18 08:09:37.375179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.377 [2024-11-18 08:09:37.375413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.377 [2024-11-18 08:09:37.375655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.377 [2024-11-18 08:09:37.375678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.377 [2024-11-18 08:09:37.375692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.377 [2024-11-18 08:09:37.375706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.377 [2024-11-18 08:09:37.388012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.377 [2024-11-18 08:09:37.388364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.377 [2024-11-18 08:09:37.388393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.377 [2024-11-18 08:09:37.388410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.377 [2024-11-18 08:09:37.388679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.377 [2024-11-18 08:09:37.388910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.377 [2024-11-18 08:09:37.388931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.377 [2024-11-18 08:09:37.388945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.377 [2024-11-18 08:09:37.388958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.377 [2024-11-18 08:09:37.401397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.377 [2024-11-18 08:09:37.401802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.377 [2024-11-18 08:09:37.401846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.377 [2024-11-18 08:09:37.401862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.377 [2024-11-18 08:09:37.402105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.377 [2024-11-18 08:09:37.402337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.377 [2024-11-18 08:09:37.402358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.377 [2024-11-18 08:09:37.402376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.377 [2024-11-18 08:09:37.402389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.377 [2024-11-18 08:09:37.414722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.377 [2024-11-18 08:09:37.415089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.377 [2024-11-18 08:09:37.415118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.377 [2024-11-18 08:09:37.415135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.377 [2024-11-18 08:09:37.415375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.377 [2024-11-18 08:09:37.415632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.377 [2024-11-18 08:09:37.415654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.377 [2024-11-18 08:09:37.415669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.377 [2024-11-18 08:09:37.415681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.377 [2024-11-18 08:09:37.428057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.377 [2024-11-18 08:09:37.428404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.377 [2024-11-18 08:09:37.428432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.377 [2024-11-18 08:09:37.428448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.377 [2024-11-18 08:09:37.428710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.377 [2024-11-18 08:09:37.428956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.377 [2024-11-18 08:09:37.428977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.377 [2024-11-18 08:09:37.428990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.377 [2024-11-18 08:09:37.429003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.377 [2024-11-18 08:09:37.441302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.377 [2024-11-18 08:09:37.441687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.377 [2024-11-18 08:09:37.441716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.377 [2024-11-18 08:09:37.441733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.377 [2024-11-18 08:09:37.441975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.377 [2024-11-18 08:09:37.442185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.377 [2024-11-18 08:09:37.442206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.377 [2024-11-18 08:09:37.442219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.377 [2024-11-18 08:09:37.442232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.377 [2024-11-18 08:09:37.454630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.377 [2024-11-18 08:09:37.455016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.377 [2024-11-18 08:09:37.455043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.377 [2024-11-18 08:09:37.455059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.377 [2024-11-18 08:09:37.455260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.377 [2024-11-18 08:09:37.455514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.377 [2024-11-18 08:09:37.455552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.377 [2024-11-18 08:09:37.455567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.377 [2024-11-18 08:09:37.455582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.639 [2024-11-18 08:09:37.467866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.639 [2024-11-18 08:09:37.468189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.639 [2024-11-18 08:09:37.468216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.639 [2024-11-18 08:09:37.468233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.639 [2024-11-18 08:09:37.468450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.639 [2024-11-18 08:09:37.468697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.639 [2024-11-18 08:09:37.468720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.639 [2024-11-18 08:09:37.468735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.639 [2024-11-18 08:09:37.468748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.639 [2024-11-18 08:09:37.481146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.639 [2024-11-18 08:09:37.481507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.639 [2024-11-18 08:09:37.481552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.639 [2024-11-18 08:09:37.481570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.639 [2024-11-18 08:09:37.481810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.639 [2024-11-18 08:09:37.482020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.639 [2024-11-18 08:09:37.482041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.639 [2024-11-18 08:09:37.482053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.639 [2024-11-18 08:09:37.482066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.639 [2024-11-18 08:09:37.494508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.639 [2024-11-18 08:09:37.494860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.639 [2024-11-18 08:09:37.494893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.494910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.495117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.495325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.495347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.495360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.495372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.507828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.508182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.508210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.508226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.508461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.508702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.508725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.508741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.508755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.521140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.521555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.521584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.521602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.521842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.522036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.522056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.522068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.522080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.534427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.534790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.534820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.534852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.535093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.535302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.535322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.535335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.535348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.547727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.548160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.548187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.548204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.548439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.548674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.548697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.548711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.548724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.561041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.561393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.561422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.561439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.561689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.561901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.561922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.561935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.561947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.574301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.574684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.574714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.574731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.574962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.575202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.575224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.575255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.575268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.587705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.588075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.588103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.588120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.588356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.588584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.588606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.588620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.588634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.601089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.601508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.601537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.601555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.601784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.602013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.602033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.602045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.602057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.614422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.614772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.640 [2024-11-18 08:09:37.614801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.640 [2024-11-18 08:09:37.614833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.640 [2024-11-18 08:09:37.615069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.640 [2024-11-18 08:09:37.615263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.640 [2024-11-18 08:09:37.615283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.640 [2024-11-18 08:09:37.615296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.640 [2024-11-18 08:09:37.615308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.640 [2024-11-18 08:09:37.627685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.640 [2024-11-18 08:09:37.628055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.641 [2024-11-18 08:09:37.628083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.641 [2024-11-18 08:09:37.628100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.641 [2024-11-18 08:09:37.628340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.641 [2024-11-18 08:09:37.628584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.641 [2024-11-18 08:09:37.628606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.641 [2024-11-18 08:09:37.628621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.641 [2024-11-18 08:09:37.628634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.641 [2024-11-18 08:09:37.640998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.641 [2024-11-18 08:09:37.641346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.641 [2024-11-18 08:09:37.641374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.641 [2024-11-18 08:09:37.641390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.641 [2024-11-18 08:09:37.641646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.641 [2024-11-18 08:09:37.641859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.641 [2024-11-18 08:09:37.641879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.641 [2024-11-18 08:09:37.641893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.641 [2024-11-18 08:09:37.641905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.641 [2024-11-18 08:09:37.654350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.641 [2024-11-18 08:09:37.654722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.641 [2024-11-18 08:09:37.654751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.641 [2024-11-18 08:09:37.654768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.641 [2024-11-18 08:09:37.655010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.641 [2024-11-18 08:09:37.655216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.641 [2024-11-18 08:09:37.655237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.641 [2024-11-18 08:09:37.655250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.641 [2024-11-18 08:09:37.655264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.641 [2024-11-18 08:09:37.667510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.641 [2024-11-18 08:09:37.667827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.641 [2024-11-18 08:09:37.667860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.641 [2024-11-18 08:09:37.667877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.641 [2024-11-18 08:09:37.668099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.641 [2024-11-18 08:09:37.668311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.641 [2024-11-18 08:09:37.668332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.641 [2024-11-18 08:09:37.668346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.641 [2024-11-18 08:09:37.668358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.641 [2024-11-18 08:09:37.680824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.641 [2024-11-18 08:09:37.681178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.641 [2024-11-18 08:09:37.681207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:44.641 [2024-11-18 08:09:37.681224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:44.641 [2024-11-18 08:09:37.681467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:44.641 [2024-11-18 08:09:37.681705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.641 [2024-11-18 08:09:37.681726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.641 [2024-11-18 08:09:37.681740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.641 [2024-11-18 08:09:37.681752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:44.641 4543.40 IOPS, 17.75 MiB/s [2024-11-18T07:09:37.729Z]
00:35:44.641 [2024-11-18 08:09:37.694070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.641 [2024-11-18 08:09:37.694390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.641 [2024-11-18 08:09:37.694418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.641 [2024-11-18 08:09:37.694434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.641 [2024-11-18 08:09:37.694687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.641 [2024-11-18 08:09:37.694933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.641 [2024-11-18 08:09:37.694954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.641 [2024-11-18 08:09:37.694968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.641 [2024-11-18 08:09:37.694980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.641 [2024-11-18 08:09:37.707272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.641 [2024-11-18 08:09:37.707612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.641 [2024-11-18 08:09:37.707642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.641 [2024-11-18 08:09:37.707659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.641 [2024-11-18 08:09:37.707908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.641 [2024-11-18 08:09:37.708103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.641 [2024-11-18 08:09:37.708123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.641 [2024-11-18 08:09:37.708136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.641 [2024-11-18 08:09:37.708148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.641 [2024-11-18 08:09:37.720572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.641 [2024-11-18 08:09:37.720946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.641 [2024-11-18 08:09:37.720976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.641 [2024-11-18 08:09:37.720992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.641 [2024-11-18 08:09:37.721233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.641 [2024-11-18 08:09:37.721443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.641 [2024-11-18 08:09:37.721463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.641 [2024-11-18 08:09:37.721500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.641 [2024-11-18 08:09:37.721515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.904 [2024-11-18 08:09:37.733887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.904 [2024-11-18 08:09:37.734279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.904 [2024-11-18 08:09:37.734308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.904 [2024-11-18 08:09:37.734325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.904 [2024-11-18 08:09:37.734563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.904 [2024-11-18 08:09:37.734779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.904 [2024-11-18 08:09:37.734801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.904 [2024-11-18 08:09:37.734830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.904 [2024-11-18 08:09:37.734842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.904 [2024-11-18 08:09:37.747050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.904 [2024-11-18 08:09:37.747403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.904 [2024-11-18 08:09:37.747433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.904 [2024-11-18 08:09:37.747450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.904 [2024-11-18 08:09:37.747689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.904 [2024-11-18 08:09:37.747919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.904 [2024-11-18 08:09:37.747946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.904 [2024-11-18 08:09:37.747959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.904 [2024-11-18 08:09:37.747972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.904 [2024-11-18 08:09:37.760217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.904 [2024-11-18 08:09:37.760591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.904 [2024-11-18 08:09:37.760620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.904 [2024-11-18 08:09:37.760637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.904 [2024-11-18 08:09:37.760858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.904 [2024-11-18 08:09:37.761065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.904 [2024-11-18 08:09:37.761086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.904 [2024-11-18 08:09:37.761099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.904 [2024-11-18 08:09:37.761113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.904 [2024-11-18 08:09:37.773499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.904 [2024-11-18 08:09:37.773926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.904 [2024-11-18 08:09:37.773955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.904 [2024-11-18 08:09:37.773972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.904 [2024-11-18 08:09:37.774214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.904 [2024-11-18 08:09:37.774422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.904 [2024-11-18 08:09:37.774443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.904 [2024-11-18 08:09:37.774456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.904 [2024-11-18 08:09:37.774483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.904 [2024-11-18 08:09:37.786696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.904 [2024-11-18 08:09:37.787153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.904 [2024-11-18 08:09:37.787183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.904 [2024-11-18 08:09:37.787201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.904 [2024-11-18 08:09:37.787441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.904 [2024-11-18 08:09:37.787682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.904 [2024-11-18 08:09:37.787705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.904 [2024-11-18 08:09:37.787718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.904 [2024-11-18 08:09:37.787731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.904 [2024-11-18 08:09:37.799958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.904 [2024-11-18 08:09:37.800341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.904 [2024-11-18 08:09:37.800370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.904 [2024-11-18 08:09:37.800386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.904 [2024-11-18 08:09:37.800654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.904 [2024-11-18 08:09:37.800886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.904 [2024-11-18 08:09:37.800907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.904 [2024-11-18 08:09:37.800920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.904 [2024-11-18 08:09:37.800932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.904 [2024-11-18 08:09:37.813133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.904 [2024-11-18 08:09:37.813550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.904 [2024-11-18 08:09:37.813580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.904 [2024-11-18 08:09:37.813597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.904 [2024-11-18 08:09:37.813840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.904 [2024-11-18 08:09:37.814049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.904 [2024-11-18 08:09:37.814070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.904 [2024-11-18 08:09:37.814084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.904 [2024-11-18 08:09:37.814097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.904 [2024-11-18 08:09:37.826415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.904 [2024-11-18 08:09:37.826835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.904 [2024-11-18 08:09:37.826865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.904 [2024-11-18 08:09:37.826882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.904 [2024-11-18 08:09:37.827124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.904 [2024-11-18 08:09:37.827324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.904 [2024-11-18 08:09:37.827345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.827359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.827373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.839878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.840233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.840267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.840284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.840535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.840746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.840768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.840783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.840810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.853162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.853641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.853671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.853688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.853929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.854137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.854159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.854172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.854185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.866468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.866882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.866911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.866928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.867169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.867362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.867383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.867396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.867408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.879737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.880170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.880198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.880214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.880458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.880688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.880711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.880725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.880739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.892933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.893379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.893409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.893426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.893678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.893893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.893914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.893927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.893939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.906329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.906698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.906728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.906745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.906988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.907188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.907210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.907223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.907236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.919657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.920029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.920058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.920076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.920315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.920565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.920594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.920609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.920623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.932935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.933291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.933320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.933336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.933591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.933811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.933833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.933860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.933874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.946177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.946558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.946587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.946605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.946832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.947041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.905 [2024-11-18 08:09:37.947061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.905 [2024-11-18 08:09:37.947073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.905 [2024-11-18 08:09:37.947086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.905 [2024-11-18 08:09:37.959483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.905 [2024-11-18 08:09:37.959901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.905 [2024-11-18 08:09:37.959930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.905 [2024-11-18 08:09:37.959947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.905 [2024-11-18 08:09:37.960188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.905 [2024-11-18 08:09:37.960397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.906 [2024-11-18 08:09:37.960419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.906 [2024-11-18 08:09:37.960432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.906 [2024-11-18 08:09:37.960444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.906 [2024-11-18 08:09:37.972774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.906 [2024-11-18 08:09:37.973141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.906 [2024-11-18 08:09:37.973170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.906 [2024-11-18 08:09:37.973186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.906 [2024-11-18 08:09:37.973422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.906 [2024-11-18 08:09:37.973664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.906 [2024-11-18 08:09:37.973687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.906 [2024-11-18 08:09:37.973701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.906 [2024-11-18 08:09:37.973714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:44.906 [2024-11-18 08:09:37.986088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:44.906 [2024-11-18 08:09:37.986433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.906 [2024-11-18 08:09:37.986462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:44.906 [2024-11-18 08:09:37.986479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:44.906 [2024-11-18 08:09:37.986747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:44.906 [2024-11-18 08:09:37.986978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:44.906 [2024-11-18 08:09:37.987014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:44.906 [2024-11-18 08:09:37.987028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:44.906 [2024-11-18 08:09:37.987040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.166 [2024-11-18 08:09:37.999332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.166 [2024-11-18 08:09:37.999683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.166 [2024-11-18 08:09:37.999712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.166 [2024-11-18 08:09:37.999729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.166 [2024-11-18 08:09:37.999973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.166 [2024-11-18 08:09:38.000162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.166 [2024-11-18 08:09:38.000183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.166 [2024-11-18 08:09:38.000196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.166 [2024-11-18 08:09:38.000208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.166 [2024-11-18 08:09:38.012345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.166 [2024-11-18 08:09:38.012664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.166 [2024-11-18 08:09:38.012698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.166 [2024-11-18 08:09:38.012716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.166 [2024-11-18 08:09:38.012932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.167 [2024-11-18 08:09:38.013135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.167 [2024-11-18 08:09:38.013155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.167 [2024-11-18 08:09:38.013169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.167 [2024-11-18 08:09:38.013181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.167 [2024-11-18 08:09:38.025457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.167 [2024-11-18 08:09:38.025808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.167 [2024-11-18 08:09:38.025836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.167 [2024-11-18 08:09:38.025853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.167 [2024-11-18 08:09:38.026082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.167 [2024-11-18 08:09:38.026284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.167 [2024-11-18 08:09:38.026305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.167 [2024-11-18 08:09:38.026317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.167 [2024-11-18 08:09:38.026329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.167 [2024-11-18 08:09:38.038590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.167 [2024-11-18 08:09:38.038905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.167 [2024-11-18 08:09:38.038933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.167 [2024-11-18 08:09:38.038950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.167 [2024-11-18 08:09:38.039166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.167 [2024-11-18 08:09:38.039372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.167 [2024-11-18 08:09:38.039393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.167 [2024-11-18 08:09:38.039407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.167 [2024-11-18 08:09:38.039419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.167 [2024-11-18 08:09:38.051700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.167 [2024-11-18 08:09:38.052111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.167 [2024-11-18 08:09:38.052139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.167 [2024-11-18 08:09:38.052156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.167 [2024-11-18 08:09:38.052397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.167 [2024-11-18 08:09:38.052631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.167 [2024-11-18 08:09:38.052653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.167 [2024-11-18 08:09:38.052666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.167 [2024-11-18 08:09:38.052679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.167 [2024-11-18 08:09:38.064764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.167 [2024-11-18 08:09:38.065109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.167 [2024-11-18 08:09:38.065138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.167 [2024-11-18 08:09:38.065154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.167 [2024-11-18 08:09:38.065392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.167 [2024-11-18 08:09:38.065641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.167 [2024-11-18 08:09:38.065664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.167 [2024-11-18 08:09:38.065678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.167 [2024-11-18 08:09:38.065692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.167 [2024-11-18 08:09:38.077821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.167 [2024-11-18 08:09:38.078246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.167 [2024-11-18 08:09:38.078276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.167 [2024-11-18 08:09:38.078293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.167 [2024-11-18 08:09:38.078548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.167 [2024-11-18 08:09:38.078794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.167 [2024-11-18 08:09:38.078831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.167 [2024-11-18 08:09:38.078846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.167 [2024-11-18 08:09:38.078859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.167 [2024-11-18 08:09:38.090898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.167 [2024-11-18 08:09:38.091308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.167 [2024-11-18 08:09:38.091337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.167 [2024-11-18 08:09:38.091353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.167 [2024-11-18 08:09:38.091605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.167 [2024-11-18 08:09:38.091835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.167 [2024-11-18 08:09:38.091877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.167 [2024-11-18 08:09:38.091891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.167 [2024-11-18 08:09:38.091904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.167 [2024-11-18 08:09:38.104135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.167 [2024-11-18 08:09:38.104445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.167 [2024-11-18 08:09:38.104474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.167 [2024-11-18 08:09:38.104513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.167 [2024-11-18 08:09:38.104761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.167 [2024-11-18 08:09:38.104967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.167 [2024-11-18 08:09:38.104987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.167 [2024-11-18 08:09:38.104999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.167 [2024-11-18 08:09:38.105011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.167 [2024-11-18 08:09:38.117132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.167 [2024-11-18 08:09:38.117539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.167 [2024-11-18 08:09:38.117567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.167 [2024-11-18 08:09:38.117583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.167 [2024-11-18 08:09:38.117814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.167 [2024-11-18 08:09:38.118016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.167 [2024-11-18 08:09:38.118037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.167 [2024-11-18 08:09:38.118050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.167 [2024-11-18 08:09:38.118062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.167 [2024-11-18 08:09:38.130109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.167 [2024-11-18 08:09:38.130516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.167 [2024-11-18 08:09:38.130545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.167 [2024-11-18 08:09:38.130562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.167 [2024-11-18 08:09:38.130797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.167 [2024-11-18 08:09:38.131001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.167 [2024-11-18 08:09:38.131021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.167 [2024-11-18 08:09:38.131033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.167 [2024-11-18 08:09:38.131046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.167 [2024-11-18 08:09:38.143109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.167 [2024-11-18 08:09:38.143451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.167 [2024-11-18 08:09:38.143480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.143524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.143767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.143993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.144014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.144027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.144039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.168 [2024-11-18 08:09:38.156103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.168 [2024-11-18 08:09:38.156458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.168 [2024-11-18 08:09:38.156556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.156573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.156801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.157007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.157027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.157040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.157052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.168 [2024-11-18 08:09:38.169197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.168 [2024-11-18 08:09:38.169602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.168 [2024-11-18 08:09:38.169631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.169647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.169882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.170071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.170092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.170105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.170117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.168 [2024-11-18 08:09:38.182434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.168 [2024-11-18 08:09:38.182903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.168 [2024-11-18 08:09:38.182936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.182953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.183198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.183386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.183406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.183419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.183431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.168 [2024-11-18 08:09:38.195665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.168 [2024-11-18 08:09:38.196029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.168 [2024-11-18 08:09:38.196055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.196072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.196301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.196531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.196566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.196579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.196593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.168 [2024-11-18 08:09:38.208730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.168 [2024-11-18 08:09:38.209086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.168 [2024-11-18 08:09:38.209113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.209129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.209360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.209606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.209627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.209640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.209653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.168 [2024-11-18 08:09:38.221804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.168 [2024-11-18 08:09:38.222156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.168 [2024-11-18 08:09:38.222185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.222202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.222448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.222695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.222718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.222731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.222744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.168 [2024-11-18 08:09:38.235117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.168 [2024-11-18 08:09:38.235539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.168 [2024-11-18 08:09:38.235569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.235586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.235826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.236022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.236042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.236056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.236069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.168 [2024-11-18 08:09:38.248235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.168 [2024-11-18 08:09:38.248582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.168 [2024-11-18 08:09:38.248610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.168 [2024-11-18 08:09:38.248626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.168 [2024-11-18 08:09:38.248855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.168 [2024-11-18 08:09:38.249059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.168 [2024-11-18 08:09:38.249078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.168 [2024-11-18 08:09:38.249092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.168 [2024-11-18 08:09:38.249103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.429 [2024-11-18 08:09:38.261735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.429 [2024-11-18 08:09:38.262095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-18 08:09:38.262123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.429 [2024-11-18 08:09:38.262140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.429 [2024-11-18 08:09:38.262374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.429 [2024-11-18 08:09:38.262620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.429 [2024-11-18 08:09:38.262646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.429 [2024-11-18 08:09:38.262660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.429 [2024-11-18 08:09:38.262673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.429 [2024-11-18 08:09:38.274943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.429 [2024-11-18 08:09:38.275351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-18 08:09:38.275378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.429 [2024-11-18 08:09:38.275395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.429 [2024-11-18 08:09:38.275661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.429 [2024-11-18 08:09:38.275889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.429 [2024-11-18 08:09:38.275908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.429 [2024-11-18 08:09:38.275921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.429 [2024-11-18 08:09:38.275933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 892103 Killed "${NVMF_APP[@]}" "$@" 00:35:45.429 [2024-11-18 08:09:38.288203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:45.429 [2024-11-18 08:09:38.288565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-18 08:09:38.288594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.429 [2024-11-18 08:09:38.288615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.429 [2024-11-18 08:09:38.288857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.429 [2024-11-18 08:09:38.289075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.429 [2024-11-18 08:09:38.289096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.429 [2024-11-18 08:09:38.289109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:45.429 [2024-11-18 08:09:38.289122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=893056 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 893056 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 893056 ']' 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.429 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.429 [2024-11-18 08:09:38.301629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.429 [2024-11-18 08:09:38.302078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-18 08:09:38.302135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.429 [2024-11-18 08:09:38.302155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.429 [2024-11-18 08:09:38.302385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.429 [2024-11-18 08:09:38.302648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.429 [2024-11-18 08:09:38.302681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.429 [2024-11-18 08:09:38.302696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.429 [2024-11-18 08:09:38.302710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.429 [2024-11-18 08:09:38.315102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.429 [2024-11-18 08:09:38.315420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-18 08:09:38.315447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.429 [2024-11-18 08:09:38.315463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.429 [2024-11-18 08:09:38.315700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.429 [2024-11-18 08:09:38.315933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.429 [2024-11-18 08:09:38.315954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.429 [2024-11-18 08:09:38.315967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.429 [2024-11-18 08:09:38.315979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.429 [2024-11-18 08:09:38.328556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.429 [2024-11-18 08:09:38.328963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.429 [2024-11-18 08:09:38.328993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.429 [2024-11-18 08:09:38.329010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.429 [2024-11-18 08:09:38.329262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.429 [2024-11-18 08:09:38.329527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.429 [2024-11-18 08:09:38.329550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.429 [2024-11-18 08:09:38.329572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.429 [2024-11-18 08:09:38.329587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.429 [2024-11-18 08:09:38.341010] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:35:45.429 [2024-11-18 08:09:38.341067] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:45.429 [2024-11-18 08:09:38.341977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.429 [2024-11-18 08:09:38.342391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.429 [2024-11-18 08:09:38.342419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.429 [2024-11-18 08:09:38.342435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.429 [2024-11-18 08:09:38.342684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.342902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.342921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.342935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.342947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.355239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.355603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.355632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.355649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.355890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.356100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.356120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.356133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.356145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.368449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.368962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.368991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.369008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.369252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.369460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.369504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.369528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.369542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.381919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.382244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.382272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.382290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.382521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.382727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.382748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.382762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.382791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.395228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.395645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.395674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.395691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.395933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.396141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.396161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.396173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.396185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.408652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.409025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.409054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.409070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.409312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.409553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.409575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.409589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.409601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.415968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:45.430 [2024-11-18 08:09:38.421889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.422343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.422375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.422393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.422647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.422862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.422883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.422897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.422911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.435163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.435734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.435773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.435794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.436056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.436255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.436276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.436292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.436307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.448602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.449044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.449073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.449091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.449336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.449578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.430 [2024-11-18 08:09:38.449602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.430 [2024-11-18 08:09:38.449617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.430 [2024-11-18 08:09:38.449646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.430 [2024-11-18 08:09:38.461446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:45.430 [2024-11-18 08:09:38.461480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:45.430 [2024-11-18 08:09:38.461508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:45.430 [2024-11-18 08:09:38.461520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:45.430 [2024-11-18 08:09:38.461530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:45.430 [2024-11-18 08:09:38.461895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.430 [2024-11-18 08:09:38.462227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.430 [2024-11-18 08:09:38.462257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.430 [2024-11-18 08:09:38.462274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.430 [2024-11-18 08:09:38.462509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.430 [2024-11-18 08:09:38.462716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.431 [2024-11-18 08:09:38.462738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.431 [2024-11-18 08:09:38.462751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.431 [2024-11-18 08:09:38.462765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.431 [2024-11-18 08:09:38.462961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:35:45.431 [2024-11-18 08:09:38.463026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:35:45.431 [2024-11-18 08:09:38.463030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:45.431 [2024-11-18 08:09:38.475475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.431 [2024-11-18 08:09:38.476029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.431 [2024-11-18 08:09:38.476069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.431 [2024-11-18 08:09:38.476091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.431 [2024-11-18 08:09:38.476346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.431 [2024-11-18 08:09:38.476586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.431 [2024-11-18 08:09:38.476610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.431 [2024-11-18 08:09:38.476627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.431 [2024-11-18 08:09:38.476644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.431 [2024-11-18 08:09:38.489051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.431 [2024-11-18 08:09:38.489610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.431 [2024-11-18 08:09:38.489651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.431 [2024-11-18 08:09:38.489673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.431 [2024-11-18 08:09:38.489931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.431 [2024-11-18 08:09:38.490144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.431 [2024-11-18 08:09:38.490167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.431 [2024-11-18 08:09:38.490197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.431 [2024-11-18 08:09:38.490213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.431 [2024-11-18 08:09:38.502532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.431 [2024-11-18 08:09:38.503043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.431 [2024-11-18 08:09:38.503084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.431 [2024-11-18 08:09:38.503106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.431 [2024-11-18 08:09:38.503363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.431 [2024-11-18 08:09:38.503605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.431 [2024-11-18 08:09:38.503629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.431 [2024-11-18 08:09:38.503662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.431 [2024-11-18 08:09:38.503680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.431 [2024-11-18 08:09:38.516159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.693 [2024-11-18 08:09:38.516653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.693 [2024-11-18 08:09:38.516690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.693 [2024-11-18 08:09:38.516710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.693 [2024-11-18 08:09:38.516949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.693 [2024-11-18 08:09:38.517167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.693 [2024-11-18 08:09:38.517190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.693 [2024-11-18 08:09:38.517207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.693 [2024-11-18 08:09:38.517223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.693 [2024-11-18 08:09:38.529814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.693 [2024-11-18 08:09:38.530315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.693 [2024-11-18 08:09:38.530355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.693 [2024-11-18 08:09:38.530376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.693 [2024-11-18 08:09:38.530613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.693 [2024-11-18 08:09:38.530851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.693 [2024-11-18 08:09:38.530873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.693 [2024-11-18 08:09:38.530891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.693 [2024-11-18 08:09:38.530907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.693 [2024-11-18 08:09:38.543374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.693 [2024-11-18 08:09:38.543867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.693 [2024-11-18 08:09:38.543906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.693 [2024-11-18 08:09:38.543926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.693 [2024-11-18 08:09:38.544184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.693 [2024-11-18 08:09:38.544396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.693 [2024-11-18 08:09:38.544418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.693 [2024-11-18 08:09:38.544435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.693 [2024-11-18 08:09:38.544452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.693 [2024-11-18 08:09:38.557000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.693 [2024-11-18 08:09:38.557350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.693 [2024-11-18 08:09:38.557379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.693 [2024-11-18 08:09:38.557396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.693 [2024-11-18 08:09:38.557634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.693 [2024-11-18 08:09:38.557860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.693 [2024-11-18 08:09:38.557881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.693 [2024-11-18 08:09:38.557895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.693 [2024-11-18 08:09:38.557908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.693 [2024-11-18 08:09:38.570556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.693 [2024-11-18 08:09:38.570915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.693 [2024-11-18 08:09:38.570943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.693 [2024-11-18 08:09:38.570961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.693 [2024-11-18 08:09:38.571192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.693 [2024-11-18 08:09:38.571414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.693 [2024-11-18 08:09:38.571436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.693 [2024-11-18 08:09:38.571451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.693 [2024-11-18 08:09:38.571464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.693 [2024-11-18 08:09:38.583990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.693 [2024-11-18 08:09:38.584346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.693 [2024-11-18 08:09:38.584376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.693 [2024-11-18 08:09:38.584401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.693 [2024-11-18 08:09:38.584625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.693 [2024-11-18 08:09:38.584845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.693 [2024-11-18 08:09:38.584867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.693 [2024-11-18 08:09:38.584883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.693 [2024-11-18 08:09:38.584897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.693 [2024-11-18 08:09:38.597437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.693 [2024-11-18 08:09:38.597827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.693 [2024-11-18 08:09:38.597856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.693 [2024-11-18 08:09:38.597873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.693 [2024-11-18 08:09:38.598102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.694 [2024-11-18 08:09:38.598324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.694 [2024-11-18 08:09:38.598345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.694 [2024-11-18 08:09:38.598359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.694 [2024-11-18 08:09:38.598371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.694 [2024-11-18 08:09:38.611053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.694 [2024-11-18 08:09:38.611408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.694 [2024-11-18 08:09:38.611437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.694 [2024-11-18 08:09:38.611454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.694 [2024-11-18 08:09:38.611678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.694 [2024-11-18 08:09:38.611909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.694 [2024-11-18 08:09:38.611932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.694 [2024-11-18 08:09:38.611947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.694 [2024-11-18 08:09:38.611961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:45.694 [2024-11-18 08:09:38.624529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.694 [2024-11-18 08:09:38.624902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.694 [2024-11-18 08:09:38.624931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420
00:35:45.694 [2024-11-18 08:09:38.624948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set
00:35:45.694 [2024-11-18 08:09:38.625176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor
00:35:45.694 [2024-11-18 08:09:38.625399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.694 [2024-11-18 08:09:38.625421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.694 [2024-11-18 08:09:38.625437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.694 [2024-11-18 08:09:38.625450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.694 [2024-11-18 08:09:38.638104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.694 [2024-11-18 08:09:38.638452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.694 [2024-11-18 08:09:38.638481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.694 [2024-11-18 08:09:38.638508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.694 [2024-11-18 08:09:38.638723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.694 [2024-11-18 08:09:38.638966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.694 [2024-11-18 08:09:38.638989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.694 [2024-11-18 08:09:38.639004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.694 [2024-11-18 08:09:38.639018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.694 [2024-11-18 08:09:38.650863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.694 [2024-11-18 08:09:38.651731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.694 [2024-11-18 08:09:38.652089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.694 [2024-11-18 08:09:38.652118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.694 [2024-11-18 08:09:38.652135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.694 [2024-11-18 08:09:38.652364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.694 [2024-11-18 08:09:38.652634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.694 [2024-11-18 08:09:38.652657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.694 [2024-11-18 08:09:38.652672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.694 [2024-11-18 08:09:38.652685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.694 [2024-11-18 08:09:38.665293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.694 [2024-11-18 08:09:38.665667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.694 [2024-11-18 08:09:38.665699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.694 [2024-11-18 08:09:38.665718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.694 [2024-11-18 08:09:38.665959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.694 [2024-11-18 08:09:38.666183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.694 [2024-11-18 08:09:38.666205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.694 [2024-11-18 08:09:38.666221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.694 [2024-11-18 08:09:38.666234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.694 [2024-11-18 08:09:38.678893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.694 [2024-11-18 08:09:38.679215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.694 [2024-11-18 08:09:38.679244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.694 [2024-11-18 08:09:38.679261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.694 [2024-11-18 08:09:38.679483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.694 [2024-11-18 08:09:38.679718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.694 [2024-11-18 08:09:38.679741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.694 [2024-11-18 08:09:38.679756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.694 [2024-11-18 08:09:38.679769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.694 3786.17 IOPS, 14.79 MiB/s [2024-11-18T07:09:38.782Z] Malloc0 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.694 [2024-11-18 08:09:38.692453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.694 [2024-11-18 08:09:38.693010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.694 [2024-11-18 08:09:38.693045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.694 [2024-11-18 08:09:38.693066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.694 [2024-11-18 08:09:38.693304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.694 [2024-11-18 08:09:38.693565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.694 [2024-11-18 08:09:38.693590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.694 [2024-11-18 08:09:38.693608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.694 [2024-11-18 08:09:38.693625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.694 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.694 [2024-11-18 08:09:38.706047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.694 [2024-11-18 08:09:38.706436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.694 [2024-11-18 08:09:38.706464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209e970 with addr=10.0.0.2, port=4420 00:35:45.694 [2024-11-18 08:09:38.706480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e970 is same with the state(6) to be set 00:35:45.694 [2024-11-18 08:09:38.706703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e970 (9): Bad file descriptor 00:35:45.694 [2024-11-18 08:09:38.706922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.695 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.695 [2024-11-18 08:09:38.706945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.695 [2024-11-18 08:09:38.706975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.695 [2024-11-18 08:09:38.706988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.695 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.695 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.695 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.695 [2024-11-18 08:09:38.710909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.695 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.695 08:09:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 892386 00:35:45.695 [2024-11-18 08:09:38.719458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.996 [2024-11-18 08:09:38.902448] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:35:47.890 4190.29 IOPS, 16.37 MiB/s [2024-11-18T07:09:41.912Z] 4729.88 IOPS, 18.48 MiB/s [2024-11-18T07:09:42.851Z] 5160.33 IOPS, 20.16 MiB/s [2024-11-18T07:09:43.788Z] 5506.40 IOPS, 21.51 MiB/s [2024-11-18T07:09:44.723Z] 5789.09 IOPS, 22.61 MiB/s [2024-11-18T07:09:46.103Z] 6017.08 IOPS, 23.50 MiB/s [2024-11-18T07:09:47.040Z] 6211.85 IOPS, 24.27 MiB/s [2024-11-18T07:09:47.979Z] 6390.14 IOPS, 24.96 MiB/s [2024-11-18T07:09:47.979Z] 6529.93 IOPS, 25.51 MiB/s 00:35:54.891 Latency(us) 00:35:54.891 [2024-11-18T07:09:47.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.891 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:54.891 Verification LBA range: start 0x0 length 0x4000 00:35:54.891 Nvme1n1 : 15.01 6530.62 25.51 10573.68 0.00 7460.85 579.51 20097.71 00:35:54.891 [2024-11-18T07:09:47.979Z] =================================================================================================================== 00:35:54.891 [2024-11-18T07:09:47.979Z] Total : 6530.62 25.51 10573.68 0.00 7460.85 579.51 20097.71 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:54.891 rmmod nvme_tcp 00:35:54.891 rmmod nvme_fabrics 00:35:54.891 rmmod nvme_keyring 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 893056 ']' 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 893056 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 893056 ']' 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 893056 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.891 08:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 893056 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 893056' 00:35:55.150 killing process with pid 893056 00:35:55.150 08:09:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 893056 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 893056 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.150 08:09:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.682 08:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:57.682 00:35:57.682 real 0m22.371s 00:35:57.682 user 0m59.634s 00:35:57.682 sys 0m4.183s 00:35:57.682 08:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:57.682 08:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:57.682 ************************************ 00:35:57.682 END TEST nvmf_bdevperf 00:35:57.682 
************************************ 00:35:57.682 08:09:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:57.682 08:09:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:57.682 08:09:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:57.682 08:09:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.682 ************************************ 00:35:57.682 START TEST nvmf_target_disconnect 00:35:57.682 ************************************ 00:35:57.682 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:57.682 * Looking for test storage... 00:35:57.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:57.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.683 --rc genhtml_branch_coverage=1 00:35:57.683 --rc genhtml_function_coverage=1 00:35:57.683 --rc genhtml_legend=1 00:35:57.683 --rc geninfo_all_blocks=1 00:35:57.683 --rc geninfo_unexecuted_blocks=1 
00:35:57.683 00:35:57.683 ' 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:57.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.683 --rc genhtml_branch_coverage=1 00:35:57.683 --rc genhtml_function_coverage=1 00:35:57.683 --rc genhtml_legend=1 00:35:57.683 --rc geninfo_all_blocks=1 00:35:57.683 --rc geninfo_unexecuted_blocks=1 00:35:57.683 00:35:57.683 ' 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:57.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.683 --rc genhtml_branch_coverage=1 00:35:57.683 --rc genhtml_function_coverage=1 00:35:57.683 --rc genhtml_legend=1 00:35:57.683 --rc geninfo_all_blocks=1 00:35:57.683 --rc geninfo_unexecuted_blocks=1 00:35:57.683 00:35:57.683 ' 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:57.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.683 --rc genhtml_branch_coverage=1 00:35:57.683 --rc genhtml_function_coverage=1 00:35:57.683 --rc genhtml_legend=1 00:35:57.683 --rc geninfo_all_blocks=1 00:35:57.683 --rc geninfo_unexecuted_blocks=1 00:35:57.683 00:35:57.683 ' 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.683 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.684 08:09:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:57.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:57.684 08:09:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:59.587 
08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:59.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:59.587 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:59.587 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:59.588 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:59.588 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:59.588 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:59.847 08:09:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:59.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:59.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:35:59.847 00:35:59.847 --- 10.0.0.2 ping statistics --- 00:35:59.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.847 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:59.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:35:59.847 00:35:59.847 --- 10.0.0.1 ping statistics --- 00:35:59.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.847 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:59.847 08:09:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:59.847 ************************************ 00:35:59.847 START TEST nvmf_target_disconnect_tc1 00:35:59.847 ************************************ 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:59.847 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:59.848 [2024-11-18 08:09:52.834649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.848 [2024-11-18 08:09:52.834718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd8610 with 
addr=10.0.0.2, port=4420 00:35:59.848 [2024-11-18 08:09:52.834757] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:59.848 [2024-11-18 08:09:52.834791] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:59.848 [2024-11-18 08:09:52.834806] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:59.848 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:59.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:59.848 Initializing NVMe Controllers 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:59.848 00:35:59.848 real 0m0.103s 00:35:59.848 user 0m0.051s 00:35:59.848 sys 0m0.049s 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:59.848 ************************************ 00:35:59.848 END TEST nvmf_target_disconnect_tc1 00:35:59.848 ************************************ 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:59.848 08:09:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:59.848 ************************************ 00:35:59.848 START TEST nvmf_target_disconnect_tc2 00:35:59.848 ************************************ 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=896215 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 896215 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 896215 ']' 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.848 08:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.107 [2024-11-18 08:09:52.956708] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:00.107 [2024-11-18 08:09:52.956782] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:00.107 [2024-11-18 08:09:53.030584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:00.107 [2024-11-18 08:09:53.081030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.107 [2024-11-18 08:09:53.081085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:00.107 [2024-11-18 08:09:53.081114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.107 [2024-11-18 08:09:53.081126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.107 [2024-11-18 08:09:53.081136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:00.107 [2024-11-18 08:09:53.082693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:00.107 [2024-11-18 08:09:53.082745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:00.107 [2024-11-18 08:09:53.082792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:00.107 [2024-11-18 08:09:53.082795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.368 Malloc0 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.368 08:09:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.368 [2024-11-18 08:09:53.272306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.368 08:09:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.368 [2024-11-18 08:09:53.300611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=896242 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:00.368 08:09:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:02.282 08:09:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 896215
00:36:02.282 08:09:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:36:02.282 Read completed with error (sct=0, sc=8)
00:36:02.282 starting I/O failed
00:36:02.282 Write completed with error (sct=0, sc=8)
00:36:02.282 starting I/O failed
00:36:02.282 [2024-11-18 08:09:55.326253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.283 [2024-11-18 08:09:55.326568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:02.283 [2024-11-18 08:09:55.326855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:02.283 [2024-11-18 08:09:55.327133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:02.283 [2024-11-18 08:09:55.327349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.283 [2024-11-18 08:09:55.327399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.283 qpair failed and we were unable to recover it.
00:36:02.283 [2024-11-18 08:09:55.328629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.283 [2024-11-18 08:09:55.328667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.283 qpair failed and we were unable to recover it.
00:36:02.283 [2024-11-18 08:09:55.328773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.283 [2024-11-18 08:09:55.328820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.283 qpair failed and we were unable to recover it.
00:36:02.284 [2024-11-18 08:09:55.331370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.284 [2024-11-18 08:09:55.331410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.284 qpair failed and we were unable to recover it.
00:36:02.285 [2024-11-18 08:09:55.338108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.285 [2024-11-18 08:09:55.338140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.285 qpair failed and we were unable to recover it. 00:36:02.285 [2024-11-18 08:09:55.338226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.285 [2024-11-18 08:09:55.338253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.285 qpair failed and we were unable to recover it. 00:36:02.285 [2024-11-18 08:09:55.338369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.285 [2024-11-18 08:09:55.338395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.285 qpair failed and we were unable to recover it. 00:36:02.285 [2024-11-18 08:09:55.338482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.285 [2024-11-18 08:09:55.338515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.285 qpair failed and we were unable to recover it. 00:36:02.285 [2024-11-18 08:09:55.338599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.285 [2024-11-18 08:09:55.338625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.285 qpair failed and we were unable to recover it. 
00:36:02.285 [2024-11-18 08:09:55.338710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.285 [2024-11-18 08:09:55.338737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.285 qpair failed and we were unable to recover it. 00:36:02.285 [2024-11-18 08:09:55.338824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.285 [2024-11-18 08:09:55.338851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.285 qpair failed and we were unable to recover it. 00:36:02.285 [2024-11-18 08:09:55.338970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.285 [2024-11-18 08:09:55.338996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.285 qpair failed and we were unable to recover it. 00:36:02.285 [2024-11-18 08:09:55.339082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.339107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.339245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.339271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 
00:36:02.286 [2024-11-18 08:09:55.339380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.339407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.339524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.339552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.339640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.339666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.339753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.339781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.339899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.339925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 
00:36:02.286 [2024-11-18 08:09:55.340065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.340092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.340203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.340230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.340318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.340344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.340429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.340458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.340580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.340607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 
00:36:02.286 [2024-11-18 08:09:55.340716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.340741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.340821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.340847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.340964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.340989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.341074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.341100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.341184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.341210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 
00:36:02.286 [2024-11-18 08:09:55.341320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.341345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.341504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.341532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.341621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.341652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.341735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.341761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.341872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.341898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 
00:36:02.286 [2024-11-18 08:09:55.341981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.342006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.342112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.342138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.342251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.342276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.342380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.342406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.342544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.342583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 
00:36:02.286 [2024-11-18 08:09:55.342707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.342735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.342852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.342877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.342973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.342998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.343112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.343138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.343250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.343276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 
00:36:02.286 [2024-11-18 08:09:55.343386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.343412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.343517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.343557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.343662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.343701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.343854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.343883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.343971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.343997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 
00:36:02.286 [2024-11-18 08:09:55.344118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.286 [2024-11-18 08:09:55.344145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.286 qpair failed and we were unable to recover it. 00:36:02.286 [2024-11-18 08:09:55.344235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.344262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.344458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.344484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.344574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.344599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.344786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.344811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 
00:36:02.287 [2024-11-18 08:09:55.344897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.344922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.345045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.345070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.345177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.345202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.345346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.345371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.345461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.345502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 
00:36:02.287 [2024-11-18 08:09:55.345648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.345675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.345814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.345840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.345963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.345989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.346078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.346105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.346186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.346213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 
00:36:02.287 [2024-11-18 08:09:55.346304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.346330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.346411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.346437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.346535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.346575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.346663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.346689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.346780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.346805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 
00:36:02.287 [2024-11-18 08:09:55.346895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.346921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.347031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.347056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.347143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.347169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.347279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.347305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.347419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.347444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 
00:36:02.287 [2024-11-18 08:09:55.347532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.347558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.347669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.347694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.347784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.347814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.347912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.347939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 00:36:02.287 [2024-11-18 08:09:55.348019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.287 [2024-11-18 08:09:55.348047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.287 qpair failed and we were unable to recover it. 
00:36:02.287 [2024-11-18 08:09:55.348133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.287 [2024-11-18 08:09:55.348159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.287 qpair failed and we were unable to recover it.
00:36:02.287 [2024-11-18 08:09:55.348275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.287 [2024-11-18 08:09:55.348302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.287 qpair failed and we were unable to recover it.
00:36:02.287 [2024-11-18 08:09:55.348384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.348410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.348548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.348575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.348656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.348682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.348765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.348793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.348943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.348970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.349080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.349106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.349243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.349269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.349380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.349406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.349504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.349531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.349646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.349672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.349760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.349792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.349874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.349901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.350011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.350039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.350128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.350156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.350266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.350291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.350432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.350457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.350582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.350608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.350690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.350719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.350862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.350888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.351004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.351029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.351172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.351197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.351305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.351330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.351448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.351474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.351571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.351596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.351691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.351720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.351864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.351890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.352006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.352032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.352173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.352199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.352323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.352361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.352502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.352541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.352746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.352773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.352872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.352897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.353040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.353065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.353179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.353204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.353291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.353319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.353408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.353435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.353552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.288 [2024-11-18 08:09:55.353579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.288 qpair failed and we were unable to recover it.
00:36:02.288 [2024-11-18 08:09:55.353665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.353692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.353777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.353804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.353915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.353941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.354020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.354046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.354122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.354147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.354247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.354287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.354379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.354406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.354507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.354540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.354623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.354650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.354771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.354800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.354940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.354967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.355120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.355147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.355237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.355266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.355394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.355432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.355586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.355613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.355698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.355724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.355821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.355848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.355964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.355990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.356080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.356106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.356196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.356224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.356370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.356397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.356546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.356574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.356689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.356716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.356866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.356892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.357034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.357060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.357147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.357173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.357309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.357335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.357431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.357459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.357577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.357615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.357718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.357757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.357876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.357904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.357996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.358022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.358134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.358160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.358303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.358331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.358497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.358555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.358681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.358709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.358865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.358892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.358986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.289 [2024-11-18 08:09:55.359012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.289 qpair failed and we were unable to recover it.
00:36:02.289 [2024-11-18 08:09:55.359123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.359149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.359239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.359265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.359372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.359398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.359552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.359591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.359709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.359737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.359832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.359858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.359934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.359960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.360046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.360074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.360164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.360191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.360303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.360334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.360435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.360475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.360607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.360636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.360750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.360777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.360894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.360920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.360993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.361019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.361106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.361132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.361220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.361248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.361366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.361393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.361518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.361545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.361655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.361681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.361767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.361793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.361911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.361940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.362038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.362065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.362187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.362214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.362303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.362329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.362424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.362463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.362572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.362602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.362690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.362716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.362870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.362896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.363008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.363035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.363180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.363211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.363329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.363355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.363444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.363470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.363569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.363595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.363736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.363762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.363901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.363928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.364018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.364049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.290 [2024-11-18 08:09:55.364184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.290 [2024-11-18 08:09:55.364212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.290 qpair failed and we were unable to recover it.
00:36:02.291 [2024-11-18 08:09:55.364352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.364378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.364461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.364488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.364607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.364633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.364709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.364734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.364857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.364883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 
00:36:02.291 [2024-11-18 08:09:55.364966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.364993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.365104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.365131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.365273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.365299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.365410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.365435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.365526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.365554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 
00:36:02.291 [2024-11-18 08:09:55.365637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.365664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.365784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.365821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.365966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.365993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.366082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.366111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.291 qpair failed and we were unable to recover it. 00:36:02.291 [2024-11-18 08:09:55.366255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.291 [2024-11-18 08:09:55.366282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.576 qpair failed and we were unable to recover it. 
00:36:02.576 [2024-11-18 08:09:55.366397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.576 [2024-11-18 08:09:55.366424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.576 qpair failed and we were unable to recover it. 00:36:02.576 [2024-11-18 08:09:55.366577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.576 [2024-11-18 08:09:55.366603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.576 qpair failed and we were unable to recover it. 00:36:02.576 [2024-11-18 08:09:55.366690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.576 [2024-11-18 08:09:55.366716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.576 qpair failed and we were unable to recover it. 00:36:02.576 [2024-11-18 08:09:55.366813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.576 [2024-11-18 08:09:55.366839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.576 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.366951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.366979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 
00:36:02.577 [2024-11-18 08:09:55.367092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.367118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.367233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.367260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.367398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.367424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.367552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.367578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.367673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.367699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 
00:36:02.577 [2024-11-18 08:09:55.367821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.367847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.367961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.367986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.368077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.368103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.368194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.368222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.368350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.368389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 
00:36:02.577 [2024-11-18 08:09:55.368522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.368551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.368640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.368668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.368755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.368782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.368924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.368951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.369067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.369095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 
00:36:02.577 [2024-11-18 08:09:55.369238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.369266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.369379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.369404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.369478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.369513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.369630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.369661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.369746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.369772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 
00:36:02.577 [2024-11-18 08:09:55.369856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.369883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.370002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.370028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.370177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.370217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.370309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.370337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.370452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.370481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 
00:36:02.577 [2024-11-18 08:09:55.370588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.370614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.370727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.370754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.370846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.370872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.370984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.371010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.371147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.371174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 
00:36:02.577 [2024-11-18 08:09:55.371290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.371316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.371431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.371457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.371596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.371622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.371731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.371757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.577 [2024-11-18 08:09:55.371869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.371895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 
00:36:02.577 [2024-11-18 08:09:55.371986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.577 [2024-11-18 08:09:55.372025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.577 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.372141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.372168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.372285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.372312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.372395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.372421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.372540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.372567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 
00:36:02.578 [2024-11-18 08:09:55.372685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.372711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.372809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.372835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.372951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.372977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.373090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.373116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.373234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.373261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 
00:36:02.578 [2024-11-18 08:09:55.373347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.373373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.373514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.373540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.373631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.373657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.373750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.373780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.373898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.373926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 
00:36:02.578 [2024-11-18 08:09:55.374010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.374037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.374126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.374153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.374271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.374298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.374407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.374434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.374528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.374555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 
00:36:02.578 [2024-11-18 08:09:55.374681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.374720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.374879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.374906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.374993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.375020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.375208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.375240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.375359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.375387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 
00:36:02.578 [2024-11-18 08:09:55.375478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.375517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.375604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.375631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.375721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.375749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.375897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.375924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 00:36:02.578 [2024-11-18 08:09:55.376045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.578 [2024-11-18 08:09:55.376072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.578 qpair failed and we were unable to recover it. 
00:36:02.578 [2024-11-18 08:09:55.376184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.376210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.578 qpair failed and we were unable to recover it.
00:36:02.578 [2024-11-18 08:09:55.376322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.376350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.578 qpair failed and we were unable to recover it.
00:36:02.578 [2024-11-18 08:09:55.376445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.376473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.578 qpair failed and we were unable to recover it.
00:36:02.578 [2024-11-18 08:09:55.376606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.376634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.578 qpair failed and we were unable to recover it.
00:36:02.578 [2024-11-18 08:09:55.376775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.376804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.578 qpair failed and we were unable to recover it.
00:36:02.578 [2024-11-18 08:09:55.376941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.376967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.578 qpair failed and we were unable to recover it.
00:36:02.578 [2024-11-18 08:09:55.377077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.377103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.578 qpair failed and we were unable to recover it.
00:36:02.578 [2024-11-18 08:09:55.377192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.377218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.578 qpair failed and we were unable to recover it.
00:36:02.578 [2024-11-18 08:09:55.377341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.578 [2024-11-18 08:09:55.377367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.377466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.377514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.377614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.377653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.377814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.377841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.377931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.377958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.378072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.378099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.378188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.378215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.378332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.378359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.378487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.378535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.378683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.378712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.378815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.378842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.378950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.378976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.379101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.379127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.379236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.379261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.379365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.379391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.379470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.379509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.379592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.379618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.379738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.379763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.379870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.379895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.379980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.380006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.380100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.380128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.380264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.380303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.380423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.380450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.380542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.380571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.380691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.380717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.380813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.380839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.380943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.380969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.381096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.381124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.381251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.381277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.381422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.381449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.381570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.381597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.381712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.381738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.381849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.381875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.382004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.382031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.382170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.382196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.382314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.382342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.382433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.382459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.382565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.579 [2024-11-18 08:09:55.382591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.579 qpair failed and we were unable to recover it.
00:36:02.579 [2024-11-18 08:09:55.382708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.382733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.382852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.382878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.382974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.383000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.383089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.383115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.383222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.383247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.383344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.383383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.383536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.383564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.383658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.383696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.383809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.383835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.383915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.383941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.384025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.384052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.384163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.384190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.384312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.384338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.384453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.384482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.384582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.384614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.384698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.384725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.384820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.384847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.384938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.384965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.385046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.385071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.385197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.385236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.385370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.385398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.385521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.385551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.385672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.385701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.385815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.385841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.385921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.385947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.386034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.386060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.386153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.386181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.386294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.386320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.386403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.386429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.386525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.386551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.386661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.386688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.386775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.386801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.386919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.386945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.387057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.387083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.387199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.387226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.387372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.387398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.387525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.387565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.580 [2024-11-18 08:09:55.387655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.580 [2024-11-18 08:09:55.387684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.580 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.387802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.387829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.387911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.387937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.388053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.388079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.388218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.388256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.388352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.388381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.388512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.388541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.388633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.388659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.388750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.388776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.388887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.388912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.388999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.389025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.389226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.581 [2024-11-18 08:09:55.389254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.581 qpair failed and we were unable to recover it.
00:36:02.581 [2024-11-18 08:09:55.389370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.389397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.389508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.389536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.389625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.389651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.389735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.389762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.389845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.389872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 
00:36:02.581 [2024-11-18 08:09:55.390013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.390039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.390133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.390159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.390264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.390290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.390406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.390434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.390521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.390553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 
00:36:02.581 [2024-11-18 08:09:55.390669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.390697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.390810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.390836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.390928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.390954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.391093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.391119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.391206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.391233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 
00:36:02.581 [2024-11-18 08:09:55.391332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.391371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.391462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.391496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.391590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.391616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.391704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.391730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 00:36:02.581 [2024-11-18 08:09:55.391824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.581 [2024-11-18 08:09:55.391849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.581 qpair failed and we were unable to recover it. 
00:36:02.582 [2024-11-18 08:09:55.391937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.391966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.392048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.392074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.392191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.392217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.392326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.392352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.392467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.392500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 
00:36:02.582 [2024-11-18 08:09:55.392591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.392618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.392709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.392736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.392878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.392904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.392994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.393021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.393104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.393131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 
00:36:02.582 [2024-11-18 08:09:55.393241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.393266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.393462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.393487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.393582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.393614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.393722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.393748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.393853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.393879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 
00:36:02.582 [2024-11-18 08:09:55.393987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.394013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.394093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.394118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.394202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.394229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.394341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.394368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.394447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.394474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 
00:36:02.582 [2024-11-18 08:09:55.394611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.394650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.394746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.394774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.394885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.394912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.395024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.395050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.395143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.395169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 
00:36:02.582 [2024-11-18 08:09:55.395243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.395268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.395386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.395412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.395526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.395554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.395640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.395665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.395777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.395802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 
00:36:02.582 [2024-11-18 08:09:55.395884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.395910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.395999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.396024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.396136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.396162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.396236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.396262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.396343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.396368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 
00:36:02.582 [2024-11-18 08:09:55.396450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.396475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.396561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.396587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.582 [2024-11-18 08:09:55.396743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.582 [2024-11-18 08:09:55.396781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.582 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.396902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.396930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.397022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.397055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 
00:36:02.583 [2024-11-18 08:09:55.397172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.397198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.397289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.397318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.397406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.397432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.397546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.397574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.397685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.397711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 
00:36:02.583 [2024-11-18 08:09:55.397852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.397877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.397959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.397985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.398102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.398130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.398221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.398247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.398353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.398379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 
00:36:02.583 [2024-11-18 08:09:55.398456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.398482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.398579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.398606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.398714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.398740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.398896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.398922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.399007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.399033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 
00:36:02.583 [2024-11-18 08:09:55.399120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.399146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.399254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.399280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.399361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.399388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.399522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.399550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.399629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.399655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 
00:36:02.583 [2024-11-18 08:09:55.399734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.399761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.399877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.399902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.400022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.400050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.400164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.400190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.400276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.400304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 
00:36:02.583 [2024-11-18 08:09:55.400414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.400440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.400531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.400557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.400672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.400698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.400773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.400799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.400892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.400919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 
00:36:02.583 [2024-11-18 08:09:55.401031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.401058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.401146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.401174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.401264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.401289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.401373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.401398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.583 [2024-11-18 08:09:55.401478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.401512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 
00:36:02.583 [2024-11-18 08:09:55.401587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.583 [2024-11-18 08:09:55.401612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.583 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.401721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.401746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.401856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.401881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.401956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.401981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.402061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.402091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 
00:36:02.584 [2024-11-18 08:09:55.402207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.402233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.402322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.402362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.402486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.402520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.402640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.402669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.402790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.402817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 
00:36:02.584 [2024-11-18 08:09:55.402895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.402922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.403012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.403038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.403153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.403179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.403270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.403296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.403378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.403403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 
00:36:02.584 [2024-11-18 08:09:55.403483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.403513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.403627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.403652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.403725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.403751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.403839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.403864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.403981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.404006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 
00:36:02.584 [2024-11-18 08:09:55.404091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.404116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.404238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.404265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.404351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.404378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.404459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.404485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.404585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.404611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 
00:36:02.584 [2024-11-18 08:09:55.404704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.404731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.404878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.404904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.405019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.405045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.405134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.405160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.405249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.405276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 
00:36:02.584 [2024-11-18 08:09:55.405385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.405411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.405502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.405533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.405647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.405673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.405792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.405818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.405897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.405923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 
00:36:02.584 [2024-11-18 08:09:55.406001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.406027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.406169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.406197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.406309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.406334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.584 [2024-11-18 08:09:55.406424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.584 [2024-11-18 08:09:55.406463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.584 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.406589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.406617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 
00:36:02.585 [2024-11-18 08:09:55.406746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.406784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.406875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.406905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.407016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.407043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.407132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.407158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.407247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.407273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 
00:36:02.585 [2024-11-18 08:09:55.407363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.407391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.407526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.407565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.407659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.407687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.407775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.407801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.407876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.407902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 
00:36:02.585 [2024-11-18 08:09:55.407977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.408003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.408116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.408142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.408224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.408249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.408368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.408395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.408520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.408549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 
00:36:02.585 [2024-11-18 08:09:55.408695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.408722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.408808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.408834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.408946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.408973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.409085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.409114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.409226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.409252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 
00:36:02.585 [2024-11-18 08:09:55.409378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.409417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.409543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.409570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.409646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.409672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.409769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.409796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.409920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.409948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 
00:36:02.585 [2024-11-18 08:09:55.410028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.410054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.410168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.410194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.410313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.410339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.410483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.410516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.410636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.410664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 
00:36:02.585 [2024-11-18 08:09:55.410759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.410786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.410869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.410901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.411019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.411047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.411165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.411192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.411274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.411300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 
00:36:02.585 [2024-11-18 08:09:55.411380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.411405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.585 qpair failed and we were unable to recover it. 00:36:02.585 [2024-11-18 08:09:55.411516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.585 [2024-11-18 08:09:55.411542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.411648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.411674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.411757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.411785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.411944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.411970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 
00:36:02.586 [2024-11-18 08:09:55.412112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.412138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.412226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.412252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.412390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.412416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.412533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.412560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.412672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.412698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 
00:36:02.586 [2024-11-18 08:09:55.412793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.412818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.412908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.412934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.413020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.413045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.413151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.413175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.413315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.413340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 
00:36:02.586 [2024-11-18 08:09:55.413415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.413440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.413529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.413554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.413644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.413669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.413752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.413778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 00:36:02.586 [2024-11-18 08:09:55.413889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.586 [2024-11-18 08:09:55.413913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.586 qpair failed and we were unable to recover it. 
00:36:02.586 [2024-11-18 08:09:55.414003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.414029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.414116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.414142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.414247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.414273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.414380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.414410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.414497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.414523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.414636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.414662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.414745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.414770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.414848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.414873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.414979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.415004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.415077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.415102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.415203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.415243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.415346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.415386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.415497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.415536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.415632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.415660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.415769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.586 [2024-11-18 08:09:55.415795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.586 qpair failed and we were unable to recover it.
00:36:02.586 [2024-11-18 08:09:55.415911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.415938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.416027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.416053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.416172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.416201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.416320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.416349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.416441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.416468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.416555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.416582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.416666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.416692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.416816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.416842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.416926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.416952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.417042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.417071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.417182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.417208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.417289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.417316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.417404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.417431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.417579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.417605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.417719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.417747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.417856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.417888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.418026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.418051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.418164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.418191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.418277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.418305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.418450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.418480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.418612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.418640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.418753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.418779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.418871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.418897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.419036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.419062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.419176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.419202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.419321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.419348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.419427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.419453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.419540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.419567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.419673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.419699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.419791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.419816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.419899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.419925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.420037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.420063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.420144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.420169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.420287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.420315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.420403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.420429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.420531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.420558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.420640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.420666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.420748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.587 [2024-11-18 08:09:55.420774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.587 qpair failed and we were unable to recover it.
00:36:02.587 [2024-11-18 08:09:55.420913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.420939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.421018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.421044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.421160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.421188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.421302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.421328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.421453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.421480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.421573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.421599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.421737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.421762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.421845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.421871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.421953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.421979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.422104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.422144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.422240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.422269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.422388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.422416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.422507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.422535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.422645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.422671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.422792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.422818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.422934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.422961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.423102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.423128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.423218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.423249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.423367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.423394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.423479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.423512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.423623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.423649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.423789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.423815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.423961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.423987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.424097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.424123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.424234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.424259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.424345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.424372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.424454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.424479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.424582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.424610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.424694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.424720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.424870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.424896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.425011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.425037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.425180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.425206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.425288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.425314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.425425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.425451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.425581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.425608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.425692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.425717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.425830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.425856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.588 qpair failed and we were unable to recover it.
00:36:02.588 [2024-11-18 08:09:55.425937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.588 [2024-11-18 08:09:55.425963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.589 qpair failed and we were unable to recover it.
00:36:02.589 [2024-11-18 08:09:55.426045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.589 [2024-11-18 08:09:55.426071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.589 qpair failed and we were unable to recover it.
00:36:02.589 [2024-11-18 08:09:55.426144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.589 [2024-11-18 08:09:55.426170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.589 qpair failed and we were unable to recover it.
00:36:02.589 [2024-11-18 08:09:55.426252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.589 [2024-11-18 08:09:55.426278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.589 qpair failed and we were unable to recover it.
00:36:02.589 [2024-11-18 08:09:55.426353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.589 [2024-11-18 08:09:55.426378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.589 qpair failed and we were unable to recover it.
00:36:02.589 [2024-11-18 08:09:55.426523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.426550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.426634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.426660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.426748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.426777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.426897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.426925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.427036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.427062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 
00:36:02.589 [2024-11-18 08:09:55.427172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.427198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.427305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.427331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.427456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.427504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.427591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.427618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.427709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.427735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 
00:36:02.589 [2024-11-18 08:09:55.427856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.427881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.427986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.428011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.428128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.428154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.428241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.428268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.428363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.428391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 
00:36:02.589 [2024-11-18 08:09:55.428513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.428543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.428635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.428660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.428769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.428799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.428938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.428964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.429049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.429074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 
00:36:02.589 [2024-11-18 08:09:55.429161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.429188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.429304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.429330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.429437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.429463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.429588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.429614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.429710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.429736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 
00:36:02.589 [2024-11-18 08:09:55.429885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.429910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.430059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.430085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.430164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.430190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.430327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.430353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.430472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.430511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 
00:36:02.589 [2024-11-18 08:09:55.430605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.430631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.430752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.430791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.430883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.430909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.589 qpair failed and we were unable to recover it. 00:36:02.589 [2024-11-18 08:09:55.431045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.589 [2024-11-18 08:09:55.431070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.431156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.431182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 
00:36:02.590 [2024-11-18 08:09:55.431265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.431291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.431375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.431401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.431507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.431533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.431727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.431753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.431884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.431924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 
00:36:02.590 [2024-11-18 08:09:55.432011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.432039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.432149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.432174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.432286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.432318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.432429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.432456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.432586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.432612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 
00:36:02.590 [2024-11-18 08:09:55.432726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.432751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.432831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.432857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.432968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.432993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.433075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.433103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.433218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.433244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 
00:36:02.590 [2024-11-18 08:09:55.433323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.433349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.433469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.433516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.433628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.433654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.433732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.433758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.433898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.433923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 
00:36:02.590 [2024-11-18 08:09:55.434041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.434071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.434235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.434274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.434369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.434396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.434519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.434546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.434626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.434653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 
00:36:02.590 [2024-11-18 08:09:55.434764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.434796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.434937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.434963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.435081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.435109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.435230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.435260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.435350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.435376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 
00:36:02.590 [2024-11-18 08:09:55.435486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.435518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.435630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.590 [2024-11-18 08:09:55.435656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.590 qpair failed and we were unable to recover it. 00:36:02.590 [2024-11-18 08:09:55.435736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.435762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.435848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.435875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.435960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.435988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 
00:36:02.591 [2024-11-18 08:09:55.436071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.436100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.436187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.436213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.436327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.436353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.436447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.436475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.436578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.436605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 
00:36:02.591 [2024-11-18 08:09:55.436687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.436712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.436792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.436817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.436929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.436954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.437069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.437097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.437210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.437236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 
00:36:02.591 [2024-11-18 08:09:55.437315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.437342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.437483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.437515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.437602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.437628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.437718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.437744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 00:36:02.591 [2024-11-18 08:09:55.437890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.591 [2024-11-18 08:09:55.437916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.591 qpair failed and we were unable to recover it. 
00:36:02.591 [2024-11-18 08:09:55.438063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.438088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.438168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.438194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.438311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.438337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.438470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.438528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.438652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.438680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.438785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.438812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.438931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.438958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.439043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.439070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.439187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.439214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.439356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.439382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.439509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.439549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.439650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.439678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.439798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.439824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.439923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.439951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.440034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.440061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.440189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.440228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.440372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.440400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.440525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.440551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.440662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.440688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.591 [2024-11-18 08:09:55.440773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.591 [2024-11-18 08:09:55.440803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.591 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.440943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.440969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.441080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.441108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.441193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.441222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.441333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.441359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.441447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.441504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.441624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.441650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.441735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.441761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.441878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.441904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.441991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.442019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.442101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.442126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.442238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.442264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.442385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.442410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.442551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.442577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.442663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.442691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.442806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.442832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.442943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.442969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.443078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.443104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.443212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.443238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.443347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.443373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.443449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.443475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.443560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.443586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.443716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.443755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.443872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.443901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.444013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.444040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.444161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.444188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.444305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.444333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.444441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.444468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.444564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.444591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.444703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.444729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.444824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.444850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.444930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.444955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.445048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.445076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.445239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.445278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.445369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.445396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.445596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.445623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.445741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.445766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.445857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.445882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.592 [2024-11-18 08:09:55.445992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.592 [2024-11-18 08:09:55.446018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.592 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.446105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.446132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.446217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.446244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.446351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.446377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.446488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.446521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.446637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.446663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.446749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.446774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.446863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.446893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.446999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.447039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.447127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.447155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.447301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.447328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.447444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.447470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.447563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.447589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.447673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.447698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.447783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.447808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.447916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.447942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.448079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.448105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.448220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.448248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.448395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.448425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.448537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.448576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.448673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.448701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.448818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.448845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.448954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.448980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.449069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.449095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.449180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.449208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.449320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.449346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.449459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.449486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.449623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.449649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.449727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.449753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.449863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.449889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.450027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.450053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.450168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.450197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.450303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.450342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.450470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.450504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.450587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.450618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.450730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.450757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.450869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.450895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.451036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.451061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.593 qpair failed and we were unable to recover it.
00:36:02.593 [2024-11-18 08:09:55.451158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.593 [2024-11-18 08:09:55.451187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.451276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.451303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.451418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.451445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.451537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.451564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.451680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.451706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.451817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.451843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.451957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.451983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.452095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.452121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.452263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.452290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.452440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.452468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.452574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.452614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.452734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.452760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.452872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.452897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.453037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.453063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.453209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.453234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.453346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.453374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.453460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.453486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.453578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.453605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.453723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.453749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.453892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.594 [2024-11-18 08:09:55.453918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.594 qpair failed and we were unable to recover it.
00:36:02.594 [2024-11-18 08:09:55.454065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.454091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.454240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.454267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.454369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.454407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.454505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.454539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.454650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.454676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 
00:36:02.594 [2024-11-18 08:09:55.454766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.454792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.454898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.454924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.455040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.455066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.455157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.455182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.455263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.455289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 
00:36:02.594 [2024-11-18 08:09:55.455397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.455424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.455519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.455546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.455675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.455701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.455817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.455842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.455975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.456001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 
00:36:02.594 [2024-11-18 08:09:55.456072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.456098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.456206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.456231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.456380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.456408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.594 qpair failed and we were unable to recover it. 00:36:02.594 [2024-11-18 08:09:55.456534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.594 [2024-11-18 08:09:55.456574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.456703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.456742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 
00:36:02.595 [2024-11-18 08:09:55.456862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.456890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.457031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.457058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.457169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.457196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.457286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.457314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.457459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.457486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 
00:36:02.595 [2024-11-18 08:09:55.457582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.457609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.457728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.457754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.457893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.457918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.458063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.458089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.458215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.458243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 
00:36:02.595 [2024-11-18 08:09:55.458392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.458419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.458533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.458561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.458645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.458671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.458764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.458792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.458878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.458907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 
00:36:02.595 [2024-11-18 08:09:55.459024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.459050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.459137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.459164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.459263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.459302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.459456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.459483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.459585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.459611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 
00:36:02.595 [2024-11-18 08:09:55.459753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.459779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.459896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.459927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.460067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.460093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.460237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.460270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.460367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.460393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 
00:36:02.595 [2024-11-18 08:09:55.460477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.460510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.460651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.460677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.460761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.460787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.460880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.460906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.460995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.461021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 
00:36:02.595 [2024-11-18 08:09:55.461132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.461159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.461281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.595 [2024-11-18 08:09:55.461320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.595 qpair failed and we were unable to recover it. 00:36:02.595 [2024-11-18 08:09:55.461442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.461469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.461574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.461613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.461707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.461736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 
00:36:02.596 [2024-11-18 08:09:55.461862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.461888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.462003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.462029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.462142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.462168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.462259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.462286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.462412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.462440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 
00:36:02.596 [2024-11-18 08:09:55.462550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.462577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.462690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.462717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.462866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.462893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.462973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.462999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.463108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.463135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 
00:36:02.596 [2024-11-18 08:09:55.463220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.463246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.463385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.463412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.463541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.463580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.463671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.463698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.463781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.463807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 
00:36:02.596 [2024-11-18 08:09:55.463923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.463951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.464046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.464084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.464180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.464208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.464355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.464381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.464502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.464529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 
00:36:02.596 [2024-11-18 08:09:55.464647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.464673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.464796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.464821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.464937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.464963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.465077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.465103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.465186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.465212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 
00:36:02.596 [2024-11-18 08:09:55.465302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.465330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.465421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.465460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.465563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.465591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.465673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.465705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 00:36:02.596 [2024-11-18 08:09:55.465796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.596 [2024-11-18 08:09:55.465823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.596 qpair failed and we were unable to recover it. 
00:36:02.596 [2024-11-18 08:09:55.465936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.596 [2024-11-18 08:09:55.465962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.596 qpair failed and we were unable to recover it.
00:36:02.596 [2024-11-18 08:09:55.466080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.596 [2024-11-18 08:09:55.466108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.596 qpair failed and we were unable to recover it.
00:36:02.596 [2024-11-18 08:09:55.466200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.596 [2024-11-18 08:09:55.466227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.596 qpair failed and we were unable to recover it.
00:36:02.596 [2024-11-18 08:09:55.466312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.596 [2024-11-18 08:09:55.466340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.596 qpair failed and we were unable to recover it.
00:36:02.596 [2024-11-18 08:09:55.466452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.596 [2024-11-18 08:09:55.466478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.596 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.466576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.466601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.466691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.466718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.466839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.466865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.466938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.466964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.467104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.467130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.467244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.467269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.467348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.467376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.467462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.467495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.467590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.467617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.467704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.467730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.467849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.467875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.467959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.467985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.468128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.468154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.468281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.468309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.468398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.468424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.468557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.468584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.468671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.468696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.468781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.468806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.468887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.468913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.468997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.469022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.469103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.469134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.469223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.469250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.469368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.469394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.469470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.469502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.469614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.469640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.469725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.469751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.469861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.469887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.469978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.470005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.470115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.470142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.470258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.470284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.470405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.470431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.470553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.470582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.470696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.470722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.470870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.470896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.471044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.471070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.471183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.471209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.471319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.471345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.597 qpair failed and we were unable to recover it.
00:36:02.597 [2024-11-18 08:09:55.471471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.597 [2024-11-18 08:09:55.471519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.471647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.471674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.471749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.471775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.471856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.471881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.472008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.472034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.472112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.472137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.472243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.472268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.472394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.472433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.472561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.472590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.472702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.472729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.472848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.472875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.472983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.473035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.473128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.473156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.473298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.473324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.473419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.473459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.473553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.473581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.473662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.473688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.473767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.473793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.473903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.473928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.474064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.474173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.474275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.474376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.474478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.474600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.474729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.474835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.474982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.475086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.475230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.475343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.475453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.475574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.475688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.475826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.475948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.475975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.476094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.476121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.476237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.598 [2024-11-18 08:09:55.476264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.598 qpair failed and we were unable to recover it.
00:36:02.598 [2024-11-18 08:09:55.476345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.476372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.476459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.476485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.476602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.476629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.476749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.476775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.476850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.476876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.476997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.477023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.477127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.477152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.477240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.477266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.477385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.477412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.477500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.477527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.477631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.477656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.477773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.477799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.477915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.477945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.478085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.478111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.478248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.478274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.478385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.478412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.478499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.478526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.478636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.478662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.478749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.478775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.478890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.478918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.479001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.479027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.479151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.479179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.479325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.479352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.479449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.479476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.479603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.479630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.479711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.479737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.479860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.479888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.480028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.480054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.480146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.480171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.480251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.480279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.480360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.480387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.480497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.480525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.480611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.480637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.599 [2024-11-18 08:09:55.480720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.599 [2024-11-18 08:09:55.480746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.599 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.480867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.480893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.481010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.481036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.481125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.481151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.481249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.481277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.481391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.481419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.481525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.481553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.481664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.481690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.481798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.481824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.481911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.481938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.482038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.482064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.482171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.482197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.482312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.482338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.482447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.482474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.482561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.482587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.482669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.482696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.482794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.482820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.482896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.482922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.483006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.483034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.483151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.483181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.483293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.483319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.483405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.483430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.483522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.483548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.483648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.483686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.483815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.483844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.483956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.483982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.484072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.484098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.484184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.484211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.484336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.484362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.484474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.484506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.484596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.484622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.484702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.484729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.484843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.484870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.484991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.485017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.485098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.485124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.485234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.485259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.485349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.485376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.485486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.600 [2024-11-18 08:09:55.485517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.600 qpair failed and we were unable to recover it.
00:36:02.600 [2024-11-18 08:09:55.485606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.485632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.485752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.485778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.485889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.485915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.486009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.486035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.486126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.486151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.486256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.486281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.486393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.486419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.486508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.486535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.486654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.486681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.486796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.486823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.486939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.486966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.487111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.487138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.487251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.487278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.487362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.487388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.487538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.487565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.487656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.487683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.487772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.487798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.487917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.487944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.488024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.488049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.488132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.488158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.488254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.488280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.488362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.488392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.488554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.488593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.488713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.488740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.488840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.488866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.489005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.489030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.489143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.489169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.489527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.489557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.489679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.489705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.489852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.489877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.490001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.490029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.490120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.490146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.490231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.490256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.490332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.490358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.490475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.490506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.490608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.490635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.490774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.490800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.490918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.601 [2024-11-18 08:09:55.490944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.601 qpair failed and we were unable to recover it.
00:36:02.601 [2024-11-18 08:09:55.491087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.491113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.491231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.491258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.491350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.491376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.491460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.491486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.491635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.491662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.491746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.491772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.491847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.491873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.491952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.491979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.492069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.492095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.492184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.492210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.492298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.492324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.492411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.492437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.492558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.492585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.492700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.492727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.492846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.492872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.492983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.493009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.493096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.493122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.493216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.493241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.493358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.493384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.493468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.493500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.493615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.493641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.493722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.493748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.493830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.493856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.493972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.494002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.494094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.494120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.494233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.494260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.494377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.494403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.494506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.494533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.494618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.494644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.494755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.494781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.494898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.494925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.495085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.495124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.495248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.495276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.495365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.495392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.495511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.495540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.495627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.495655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.495768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.495794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.602 [2024-11-18 08:09:55.495915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.602 [2024-11-18 08:09:55.495941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.602 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.496034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.496060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.496171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.496197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.496310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.496336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.496426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.496452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.496587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.496614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.496730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.496756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.496848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.496874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.496960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.496986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.497072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.497099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.497250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.497279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.497419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.497445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.497549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.497576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.497672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.497699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.497823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.497848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.497966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.497992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.498075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.498103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.498213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.498239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.498352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.498381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.498465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.498498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.498581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.498608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.498695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.498721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.498814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.498840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.498924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.498950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.499038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.499064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.499187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.499212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.499306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.499337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.499422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.499448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.499563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.499590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.499682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.499708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.499793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.499819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.499906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.499932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.500013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.500040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.500146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.500171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.500285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.500314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.500414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.500453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.500559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.500587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.500683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.500709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.603 [2024-11-18 08:09:55.500813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.603 [2024-11-18 08:09:55.500839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.603 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.500977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.501123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.501260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.501380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.501530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.501637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.501747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.501844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.501971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.501996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.502102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.502127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.502220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.502245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.502355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.502381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.502519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.502558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.502682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.502710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.502836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.502869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.502984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.503010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.503123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.503149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.503259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.503286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.503409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.503435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.503530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.604 [2024-11-18 08:09:55.503556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.604 qpair failed and we were unable to recover it.
00:36:02.604 [2024-11-18 08:09:55.503638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.503664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.503747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.503772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.503880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.503905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.504010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.504120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 
00:36:02.604 [2024-11-18 08:09:55.504249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.504365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.504469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.504620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.504721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 
00:36:02.604 [2024-11-18 08:09:55.504830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.504943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.504969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.505081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.505106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.604 qpair failed and we were unable to recover it. 00:36:02.604 [2024-11-18 08:09:55.505188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.604 [2024-11-18 08:09:55.505213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.505291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.505316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 
00:36:02.605 [2024-11-18 08:09:55.505407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.505433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.505528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.505554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.505636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.505661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.505770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.505795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.505902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.505927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 
00:36:02.605 [2024-11-18 08:09:55.506009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.506034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.506140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.506187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.506304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.506331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.506419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.506446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.506546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.506572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 
00:36:02.605 [2024-11-18 08:09:55.506652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.506677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.506786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.506810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.506914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.506939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.507021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.507045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.507179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.507215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 
00:36:02.605 [2024-11-18 08:09:55.507341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.507365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.507474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.507505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.507591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.507616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.507722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.507746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.507825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.507869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 
00:36:02.605 [2024-11-18 08:09:55.508049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.508085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.508425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.508468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.508569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.508596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.508690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.508714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.508839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.508863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 
00:36:02.605 [2024-11-18 08:09:55.508944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.508969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.509118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.509164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.509273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.509298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.509391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.509415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.509547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.509576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 
00:36:02.605 [2024-11-18 08:09:55.509665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.509691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.509771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.509796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.509879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.509904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.509988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.510020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.510104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.510131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 
00:36:02.605 [2024-11-18 08:09:55.510206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.605 [2024-11-18 08:09:55.510232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.605 qpair failed and we were unable to recover it. 00:36:02.605 [2024-11-18 08:09:55.510351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.510377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.510465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.510499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.510618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.510644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.510730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.510756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 
00:36:02.606 [2024-11-18 08:09:55.510835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.510861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.510953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.510979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.511059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.511085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.511194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.511220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.511313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.511353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 
00:36:02.606 [2024-11-18 08:09:55.511487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.511522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.511638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.511664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.511760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.511786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.511898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.511924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.512015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.512041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 
00:36:02.606 [2024-11-18 08:09:55.512149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.512175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.512284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.512310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.512397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.512423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.512543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.512570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.512652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.512677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 
00:36:02.606 [2024-11-18 08:09:55.512793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.512819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.512931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.512956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.513043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.513069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.513161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.513189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.513311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.513338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 
00:36:02.606 [2024-11-18 08:09:55.513466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.513502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.513591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.513618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.513729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.513756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.513865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.513890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.513966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.513992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 
00:36:02.606 [2024-11-18 08:09:55.514073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.514100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.514185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.514211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.514322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.514348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.514461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.514488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 00:36:02.606 [2024-11-18 08:09:55.514586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.606 [2024-11-18 08:09:55.514612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.606 qpair failed and we were unable to recover it. 
00:36:02.606 [2024-11-18 08:09:55.514753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.606 [2024-11-18 08:09:55.514779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.606 qpair failed and we were unable to recover it.
[previous 3 lines repeated 49 more times for tqpair=0x7f7af4000b90, timestamps 08:09:55.514889 through 08:09:55.521126]
00:36:02.608 [2024-11-18 08:09:55.521214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.608 [2024-11-18 08:09:55.521244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.608 qpair failed and we were unable to recover it.
[previous 3 lines repeated 7 more times for tqpair=0x7f7af8000b90, timestamps 08:09:55.521328 through 08:09:55.522029]
00:36:02.608 [2024-11-18 08:09:55.522118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.608 [2024-11-18 08:09:55.522146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.608 qpair failed and we were unable to recover it.
[previous 3 lines repeated 1 more time for tqpair=0x7f7af4000b90 at 08:09:55.522236]
00:36:02.608 [2024-11-18 08:09:55.522353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.608 [2024-11-18 08:09:55.522391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.608 qpair failed and we were unable to recover it.
[previous 3 lines repeated 13 more times for tqpair=0x7f7b00000b90, timestamps 08:09:55.522519 through 08:09:55.524065]
00:36:02.608 [2024-11-18 08:09:55.524181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.608 [2024-11-18 08:09:55.524209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.608 qpair failed and we were unable to recover it.
[previous 3 lines repeated 23 more times for tqpair=0x7f7af8000b90, timestamps 08:09:55.524299 through 08:09:55.527240]
00:36:02.609 [2024-11-18 08:09:55.527365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.609 [2024-11-18 08:09:55.527393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.609 qpair failed and we were unable to recover it.
[previous 3 lines repeated 16 more times for tqpair=0x7f7b00000b90, timestamps 08:09:55.527480 through 08:09:55.529381]
00:36:02.610 [2024-11-18 08:09:55.529500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.529526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.529611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.529637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.529728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.529754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.529860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.529886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.529971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.530022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 
00:36:02.610 [2024-11-18 08:09:55.530147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.530193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.530319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.530345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.530469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.530516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.530647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.530681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.530776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.530803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 
00:36:02.610 [2024-11-18 08:09:55.530914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.530940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.531078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.531105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.531221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.531247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.531364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.531402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.531497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.531525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 
00:36:02.610 [2024-11-18 08:09:55.531642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.531670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.531763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.531789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.531876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.531903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.531985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.532010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.532148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.532173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 
00:36:02.610 [2024-11-18 08:09:55.532265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.532291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.532387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.532415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.532512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.532541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.532630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.532657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.532747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.532773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 
00:36:02.610 [2024-11-18 08:09:55.532887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.532935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.533064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.533090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.533198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.533224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.533306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.533332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.533439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.533465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 
00:36:02.610 [2024-11-18 08:09:55.533549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.533575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.533665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.533693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.533784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.533811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.533921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.533948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.610 [2024-11-18 08:09:55.534039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.534065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 
00:36:02.610 [2024-11-18 08:09:55.534155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.610 [2024-11-18 08:09:55.534183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.610 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.534297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.534323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.534413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.534439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.534556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.534582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.534656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.534682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 
00:36:02.611 [2024-11-18 08:09:55.534772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.534823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.534999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.535047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.535154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.535205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.535293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.535319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.535464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.535498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 
00:36:02.611 [2024-11-18 08:09:55.535584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.535609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.535708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.535745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.535842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.535869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.535978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.536010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.536102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.536129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 
00:36:02.611 [2024-11-18 08:09:55.536245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.536270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.536354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.536380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.536468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.536499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.536612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.536638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.536766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.536791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 
00:36:02.611 [2024-11-18 08:09:55.536895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.536932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.537040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.537075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.537187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.537223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.537338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.537373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.537519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.537546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 
00:36:02.611 [2024-11-18 08:09:55.537662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.537689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.537798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.537824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.537945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.537971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.538071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.538106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.538212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.538248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 
00:36:02.611 [2024-11-18 08:09:55.538388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.538414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.538532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.538559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.538639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.538665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.538745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.538771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.538908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.538953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 
00:36:02.611 [2024-11-18 08:09:55.539103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.539138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.539267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.539301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.539405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.611 [2024-11-18 08:09:55.539448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.611 qpair failed and we were unable to recover it. 00:36:02.611 [2024-11-18 08:09:55.539542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.539569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.539662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.539688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 
00:36:02.612 [2024-11-18 08:09:55.539822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.539861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.539982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.540009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.540192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.540240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.540319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.540347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.540456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.540482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 
00:36:02.612 [2024-11-18 08:09:55.540600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.540627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.540735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.540762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.540846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.540872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.540959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.540986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 00:36:02.612 [2024-11-18 08:09:55.541074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.612 [2024-11-18 08:09:55.541102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.612 qpair failed and we were unable to recover it. 
00:36:02.612 [2024-11-18 08:09:55.541196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.541221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.541372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.541398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.541520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.541549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.541646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.541676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.541802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.541829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.541941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.541966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.542105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.542130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.542246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.542272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.542356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.542382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.542484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.542520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.542603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.542631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.542746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.542792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.542999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.543034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.543178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.543213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.543328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.543354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.543447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.543473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.543563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.543589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.543669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.543695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.543867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.543902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.544011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.544045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.544276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.544312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.544458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.544521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.544620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.544647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.544735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.544760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.544900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.544926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.612 [2024-11-18 08:09:55.545012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.612 [2024-11-18 08:09:55.545038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.612 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.545216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.545241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.545361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.545387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.545473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.545504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.545589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.545615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.545711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.545740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.545924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.545951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.546086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.546121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.546230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.546257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.546365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.546391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.546468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.546501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.546580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.546605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.546696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.546723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.546880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.546905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.547019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.547046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.547133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.547159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.547249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.547275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.547361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.547386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.547502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.547536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.547655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.547681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.547799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.547826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.547899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.547925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.548009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.548035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.548147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.548173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.548259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.548287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.548408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.548434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.548545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.548572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.548661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.548688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.548803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.548830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.548942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.548967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.549048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.549074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.549162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.549187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.549305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.549332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.549438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.613 [2024-11-18 08:09:55.549464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.613 qpair failed and we were unable to recover it.
00:36:02.613 [2024-11-18 08:09:55.549551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.549579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.549682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.549721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.549849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.549877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.550002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.550030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.550143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.550169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.550255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.550281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.550368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.550394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.550506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.550533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.550627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.550653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.550767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.550793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.550880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.550906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.551021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.551047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.551128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.551154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.551251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.551276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.551363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.551388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.551508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.551535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.551642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.551669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.551757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.551782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.551895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.551921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.552064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.552089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.552181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.552208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.552293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.552318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.552457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.552482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.552575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.552602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.552688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.552719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.552811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.552840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.552930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.552956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.553100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.553126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.553212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.553239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.553329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.553356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.553474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.553508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.553632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.553659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.553809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.553834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.553954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.553982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.554093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.554119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.554205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.614 [2024-11-18 08:09:55.554232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.614 qpair failed and we were unable to recover it.
00:36:02.614 [2024-11-18 08:09:55.554326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.614 [2024-11-18 08:09:55.554351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.614 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.554465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.554500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.554590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.554615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.554704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.554731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.554842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.554868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 
00:36:02.615 [2024-11-18 08:09:55.554987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.555097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.555212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.555326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.555472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 
00:36:02.615 [2024-11-18 08:09:55.555592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.555702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.555838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.555953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.555979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.556092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.556118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 
00:36:02.615 [2024-11-18 08:09:55.556207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.556233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.556317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.556343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.556455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.556482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.556575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.556602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.556714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.556740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 
00:36:02.615 [2024-11-18 08:09:55.556845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.556872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.557018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.557071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.557207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.557251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.557367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.557393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.557482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.557513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 
00:36:02.615 [2024-11-18 08:09:55.557609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.557635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.557753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.557780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.557919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.557945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.558065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.558097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.558225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.558254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 
00:36:02.615 [2024-11-18 08:09:55.558374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.558400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.558512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.558538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.558679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.558706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.615 qpair failed and we were unable to recover it. 00:36:02.615 [2024-11-18 08:09:55.558822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.615 [2024-11-18 08:09:55.558847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.558956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.558982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 
00:36:02.616 [2024-11-18 08:09:55.559073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.559099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.559215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.559241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.559336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.559364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.559448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.559474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.559628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.559675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 
00:36:02.616 [2024-11-18 08:09:55.559790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.559842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.560032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.560081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.560223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.560271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.560381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.560408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.560537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.560563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 
00:36:02.616 [2024-11-18 08:09:55.560660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.560686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.560768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.560794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.560892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.560917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.561033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.561058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.561168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.561194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 
00:36:02.616 [2024-11-18 08:09:55.561318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.561345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.561467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.561497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.561591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.561617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.561732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.561758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.561868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.561894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 
00:36:02.616 [2024-11-18 08:09:55.562018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.562044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.562134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.562159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.562252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.562277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.562355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.562381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.562484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.562516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 
00:36:02.616 [2024-11-18 08:09:55.562601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.562627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.562744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.562770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.562888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.562913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.562992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.563018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.563136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.563161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 
00:36:02.616 [2024-11-18 08:09:55.563276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.563300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.563409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.563435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.563526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.616 [2024-11-18 08:09:55.563553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.616 qpair failed and we were unable to recover it. 00:36:02.616 [2024-11-18 08:09:55.563647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.563677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.563780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.563806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 
00:36:02.617 [2024-11-18 08:09:55.563889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.563914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.564009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.564035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.564108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.564134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.564249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.564274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.564365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.564391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 
00:36:02.617 [2024-11-18 08:09:55.564510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.564537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.564662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.564687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.564783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.564809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.564925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.564951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.565054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.565080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 
00:36:02.617 [2024-11-18 08:09:55.565167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.565192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.565300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.565326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.565447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.565472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.565560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.565586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.565707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.565733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 
00:36:02.617 [2024-11-18 08:09:55.565813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.565840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.565971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.565996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.566074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.566099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.566206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.566232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 00:36:02.617 [2024-11-18 08:09:55.566312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.617 [2024-11-18 08:09:55.566338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.617 qpair failed and we were unable to recover it. 
00:36:02.621 [2024-11-18 08:09:55.580879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.580904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.581001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.581027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.581116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.581142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.581257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.581284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.581376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.581401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 
00:36:02.621 [2024-11-18 08:09:55.581507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.581534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.581674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.581699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.581810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.581835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.581951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.581976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.582087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.582112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 
00:36:02.621 [2024-11-18 08:09:55.582226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.582251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.582359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.582384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.582464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.582488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.582586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.582612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.582707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.582732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 
00:36:02.621 [2024-11-18 08:09:55.582849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.582880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.582972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.582997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.583108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.583133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.583244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.583269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.583381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.583406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 
00:36:02.621 [2024-11-18 08:09:55.583498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.583526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.583608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.583633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.583768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.583793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.583915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.583941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.584028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.584053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 
00:36:02.621 [2024-11-18 08:09:55.584138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.584164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.584244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.584269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.584359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.584385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.584479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.584512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.584772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.584798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 
00:36:02.621 [2024-11-18 08:09:55.584907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.621 [2024-11-18 08:09:55.584934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.621 qpair failed and we were unable to recover it. 00:36:02.621 [2024-11-18 08:09:55.585025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.585051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.585169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.585196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.585313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.585338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.585449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.585474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 
00:36:02.622 [2024-11-18 08:09:55.585597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.585622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.585742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.585768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.585882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.585908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.586029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.586054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.586165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.586191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 
00:36:02.622 [2024-11-18 08:09:55.586339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.586364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.586449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.586474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.586641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.586667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.586758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.586783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.586862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.586888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 
00:36:02.622 [2024-11-18 08:09:55.587020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.587044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.587157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.587182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.587264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.587289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.587364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.587390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.587472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.587506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 
00:36:02.622 [2024-11-18 08:09:55.587639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.587665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.587760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.587787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.587900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.587926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.588019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.588044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.588161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.588186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 
00:36:02.622 [2024-11-18 08:09:55.588274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.588304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.588391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.588417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.588535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.588560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.588701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.588727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.588848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.588874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 
00:36:02.622 [2024-11-18 08:09:55.588993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.589020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.589136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.589162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.589302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.589327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.589420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.589446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.589542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.589568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 
00:36:02.622 [2024-11-18 08:09:55.589686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.589712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.589829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.589855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.589941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.589967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.590055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.590081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.590175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.590201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 
00:36:02.622 [2024-11-18 08:09:55.590292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.590318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.622 [2024-11-18 08:09:55.590409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.622 [2024-11-18 08:09:55.590435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.622 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.590553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.590579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.590687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.590715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.590807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.590833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 
00:36:02.623 [2024-11-18 08:09:55.590921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.590947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.591066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.591092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.591205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.591231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.591374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.591399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.591502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.591529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 
00:36:02.623 [2024-11-18 08:09:55.591630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.591656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.591741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.591766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.591919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.591945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.592038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.592063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 00:36:02.623 [2024-11-18 08:09:55.592153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.623 [2024-11-18 08:09:55.592178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.623 qpair failed and we were unable to recover it. 
00:36:02.623 [2024-11-18 08:09:55.592292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.592317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.592399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.592425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.592544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.592570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.592653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.592678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.592783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.592808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.592922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.592947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.593088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.593113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.593198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.593223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.593303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.593329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.593445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.593470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.593558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.593589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.593700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.593748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.593860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.593885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.594004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.594029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.594138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.594164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.594270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.594295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.594385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.594410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.594513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.594540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.594627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.594653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.594749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.594774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.594886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.594913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.623 [2024-11-18 08:09:55.595033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.623 [2024-11-18 08:09:55.595059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.623 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.595147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.595173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.595260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.595287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.595386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.595412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.595502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.595529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.595620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.595648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.595730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.595755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.595869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.595895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.596007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.596033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.596119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.596145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.596238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.596263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.596347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.596372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.596527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.596553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.596674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.596701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.596843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.596869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.596980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.597007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.597126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.597152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.597264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.597289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.597399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.597426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.597592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.597639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.597732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.597757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.597843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.597869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.597975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.598001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.598086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.598112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.598197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.598223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.598332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.624 [2024-11-18 08:09:55.598358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.624 qpair failed and we were unable to recover it.
00:36:02.624 [2024-11-18 08:09:55.598442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.598467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.598575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.598614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.598738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.598766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.598854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.598886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.598980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.599007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.599089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.599115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.599204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.599230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.599316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.599341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.599452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.599477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.599573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.599600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.599707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.599738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.599870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.599897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.599981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.600007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.600125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.600151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.600266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.600292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.600406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.600433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.600519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.600546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.600629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.600655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.600742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.600768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.600860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.600887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.600998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.601025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.601148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.601177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.601290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.601317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.601404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.601430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.601547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.601574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.601684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.601716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.601819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.601850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.601946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.601977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.602076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.602107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.602276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.602308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.602448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.602479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.602598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.602625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.602721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.602747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.602867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.602894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.603020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.603054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.603197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.603228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.603339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.603365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.625 qpair failed and we were unable to recover it.
00:36:02.625 [2024-11-18 08:09:55.603455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.625 [2024-11-18 08:09:55.603481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.603568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.603595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.603706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.603731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.603812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.603857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.604017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.604049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.604148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.604179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.604270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.604306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.604438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.604470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.604589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.604615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.604703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.604731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.604877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.626 [2024-11-18 08:09:55.604910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.626 qpair failed and we were unable to recover it.
00:36:02.626 [2024-11-18 08:09:55.605007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.605040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.605172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.605203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.605358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.605384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.605500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.605527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.605616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.605643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 
00:36:02.626 [2024-11-18 08:09:55.605773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.605805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.605922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.605958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.606092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.606123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.606262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.606292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.606406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.606433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 
00:36:02.626 [2024-11-18 08:09:55.606512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.606539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.606632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.606658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.606774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.606800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.606940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.606966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.607082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.607130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 
00:36:02.626 [2024-11-18 08:09:55.607249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.607285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.607400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.607431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.607548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.607575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.607655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.607681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.607771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.607799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 
00:36:02.626 [2024-11-18 08:09:55.607910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.607936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.608028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.608059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.608166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.608198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.608317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.608349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.608459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.608514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 
00:36:02.626 [2024-11-18 08:09:55.608607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.608633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.626 [2024-11-18 08:09:55.608751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.626 [2024-11-18 08:09:55.608777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.626 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.608921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.608951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.609083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.609114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.609206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.609233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 
00:36:02.627 [2024-11-18 08:09:55.609363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.609394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.609497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.609540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.609655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.609681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.609817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.609848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.609944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.609976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 
00:36:02.627 [2024-11-18 08:09:55.610138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.610195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.610317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.610353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.610514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.610553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.610656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.610684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.610855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.610902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 
00:36:02.627 [2024-11-18 08:09:55.611016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.611043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.611134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.611161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.611250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.611276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.611369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.611396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.611479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.611511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 
00:36:02.627 [2024-11-18 08:09:55.611610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.611636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.611752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.611778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.611867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.611893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.612012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.612038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.612124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.612150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 
00:36:02.627 [2024-11-18 08:09:55.612266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.612291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.612415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.612441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.612525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.612552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.612633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.612659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.612746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.612771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 
00:36:02.627 [2024-11-18 08:09:55.612858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.612884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.613001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.613027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.613109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.613135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.613249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.613274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.613399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.613425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 
00:36:02.627 [2024-11-18 08:09:55.613531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.613557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.613646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.613672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.613792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.613818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.613902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.613927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.627 qpair failed and we were unable to recover it. 00:36:02.627 [2024-11-18 08:09:55.614008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.627 [2024-11-18 08:09:55.614034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 
00:36:02.628 [2024-11-18 08:09:55.614154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.614180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.614302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.614328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.614409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.614435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.614534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.614559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.614679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.614708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 
00:36:02.628 [2024-11-18 08:09:55.614799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.614826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.614916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.614942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.615028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.615055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.615136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.615162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.615254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.615280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 
00:36:02.628 [2024-11-18 08:09:55.615391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.615422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.615508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.615555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.615658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.615689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.615844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.615882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.616069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.616118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 
00:36:02.628 [2024-11-18 08:09:55.616239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.616271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.616409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.616437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.616559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.616585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.616701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.616731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.616836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.616863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 
00:36:02.628 [2024-11-18 08:09:55.616969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.617000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.617106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.617132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.617254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.617280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.617395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.617421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 00:36:02.628 [2024-11-18 08:09:55.617507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.628 [2024-11-18 08:09:55.617533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.628 qpair failed and we were unable to recover it. 
00:36:02.628 [... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats from 08:09:55.617674 through 08:09:55.624059 ...]
00:36:02.630 [2024-11-18 08:09:55.624170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.630 [2024-11-18 08:09:55.624196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.630 qpair failed and we were unable to recover it.
00:36:02.630 [2024-11-18 08:09:55.624328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.630 [2024-11-18 08:09:55.624367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.630 qpair failed and we were unable to recover it.
00:36:02.630 [2024-11-18 08:09:55.624471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.630 [2024-11-18 08:09:55.624506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.630 qpair failed and we were unable to recover it.
00:36:02.630 [2024-11-18 08:09:55.624593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.630 [2024-11-18 08:09:55.624620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.630 qpair failed and we were unable to recover it.
00:36:02.630 [2024-11-18 08:09:55.624734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.630 [2024-11-18 08:09:55.624762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.630 qpair failed and we were unable to recover it.
00:36:02.631 [... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats from 08:09:55.624902 through 08:09:55.633626 ...]
00:36:02.631 [2024-11-18 08:09:55.634362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.631 [2024-11-18 08:09:55.634412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.631 qpair failed and we were unable to recover it.
00:36:02.632 [2024-11-18 08:09:55.638470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.632 [2024-11-18 08:09:55.638518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.632 qpair failed and we were unable to recover it.
00:36:02.915 [2024-11-18 08:09:55.646833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.646858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.646973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.646998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.647087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.647114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.647228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.647253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.647340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.647366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 
00:36:02.915 [2024-11-18 08:09:55.647448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.647475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.647608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.647634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.647738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.647768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.647881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.647906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.648024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.648049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 
00:36:02.915 [2024-11-18 08:09:55.648132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.648158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.648271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.648297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.648400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.648439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.648596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.648624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.648744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.648771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 
00:36:02.915 [2024-11-18 08:09:55.648889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.648916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.649014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.649040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.649125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.649151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.649294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.649320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.649397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.649423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 
00:36:02.915 [2024-11-18 08:09:55.649513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.649563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.649692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.649729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.649875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.649911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.650042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.650077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.650256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.650304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 
00:36:02.915 [2024-11-18 08:09:55.650422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.915 [2024-11-18 08:09:55.650448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.915 qpair failed and we were unable to recover it. 00:36:02.915 [2024-11-18 08:09:55.650563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.650598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.650752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.650803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.650941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.650989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.651135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.651183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 
00:36:02.916 [2024-11-18 08:09:55.651282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.651307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.651418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.651444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.651568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.651597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.651684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.651710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.651804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.651830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 
00:36:02.916 [2024-11-18 08:09:55.651952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.651978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.652056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.652082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.652198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.652223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.652359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.652385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.652472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.652502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 
00:36:02.916 [2024-11-18 08:09:55.652592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.652618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.652727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.652753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.652838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.652864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.652979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.653004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.653141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.653191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 
00:36:02.916 [2024-11-18 08:09:55.653309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.653335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.653449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.653474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.653564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.653597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.653683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.653709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.653823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.653849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 
00:36:02.916 [2024-11-18 08:09:55.653961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.653986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.654069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.654095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.654177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.654203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.654290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.654315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.654416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.654458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 
00:36:02.916 [2024-11-18 08:09:55.654558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.654586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.654677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.654704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.654814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.654840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.654980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.655008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.655093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.655120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 
00:36:02.916 [2024-11-18 08:09:55.655206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.655252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.655395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.655439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.655537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.916 [2024-11-18 08:09:55.655564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.916 qpair failed and we were unable to recover it. 00:36:02.916 [2024-11-18 08:09:55.655648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.655674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.655805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.655837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 
00:36:02.917 [2024-11-18 08:09:55.656001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.656033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.656257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.656290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.656408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.656434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.656550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.656578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.656665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.656691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 
00:36:02.917 [2024-11-18 08:09:55.656780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.656806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.656887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.656914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.657018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.657049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.657176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.657208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 00:36:02.917 [2024-11-18 08:09:55.657379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.917 [2024-11-18 08:09:55.657412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.917 qpair failed and we were unable to recover it. 
00:36:02.917 [2024-11-18 08:09:55.657522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.657567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.657656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.657685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.657776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.657808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.657939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.657973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.658086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.658120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.658262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.658295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.658399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.658433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.658549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.658576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.658663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.658689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.658773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.658800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.658939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.658971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.659110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.659143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.659301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.659386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.659518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.659568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.659682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.659708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.659867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.659925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.660070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.660118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.660262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.660313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.660455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.660481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.660607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.660634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.660769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.660801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.660919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.660951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.661060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.661086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.661173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.661199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.917 [2024-11-18 08:09:55.661319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.917 [2024-11-18 08:09:55.661345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.917 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.661431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.661456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.661589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.661616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.661726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.661751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.661824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.661850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.661948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.661973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.662053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.662077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.662171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.662197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.662291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.662316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.662430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.662457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.662577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.662602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.662697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.662724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.662841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.662866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.662968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.662994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.663109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.663135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.663262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.663288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.663373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.663399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.663486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.663544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.663633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.663659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.663774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.663799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.663911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.663936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.664019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.664044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.664153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.664179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.664259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.664286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.664396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.664422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.664523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.664563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.664660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.664688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.664767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.664794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.664913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.664945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.665066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.665092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.665217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.665243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.918 [2024-11-18 08:09:55.665335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.918 [2024-11-18 08:09:55.665361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.918 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.665477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.665510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.665673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.665723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.665866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.665912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.666032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.666069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.666246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.666310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.666450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.666476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.666599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.666626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.666720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.666746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.666909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.666944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.667066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.667109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.667268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.667300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.667412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.667444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.667593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.667619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.667700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.667725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.667831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.667856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.667994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.668028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.668135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.668169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.668289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.668332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.668476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.668519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.668624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.668651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.668812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.668846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.668959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.668985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.669147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.669180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.669309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.669343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.669475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.669518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.669616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.669641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.669770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.669809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.669932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.669968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.670099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.670124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.670371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.670397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.670558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.670595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.670682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.670708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.670842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.670877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.671047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.671081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.671200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.671234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.671348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.671390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.671474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.919 [2024-11-18 08:09:55.671512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.919 qpair failed and we were unable to recover it.
00:36:02.919 [2024-11-18 08:09:55.671622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.671648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.671760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.671785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.671896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.671930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.672078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.672112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.672320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.672354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.672487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.672520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.672598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.672623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.672729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.672755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.672929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.672963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.673124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.673157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.673266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.673299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.673467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.673508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.673618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.673644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.673776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.673814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.673948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.673995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.674111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.674146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.674253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.674298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.674445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.674487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.674613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.674639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.674732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.674758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.674907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.674955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.675093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.920 [2024-11-18 08:09:55.675127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.920 qpair failed and we were unable to recover it.
00:36:02.920 [2024-11-18 08:09:55.675267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.675301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.675472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.675517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.675601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.675626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.675723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.675748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.675899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.675929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 
00:36:02.920 [2024-11-18 08:09:55.676074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.676119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.676237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.676272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.676459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.676504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.676662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.676688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.676828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.676864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 
00:36:02.920 [2024-11-18 08:09:55.676984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.677029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.677218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.677259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.677441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.677477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.677626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.677651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.677734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.677760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 
00:36:02.920 [2024-11-18 08:09:55.677851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.920 [2024-11-18 08:09:55.677876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.920 qpair failed and we were unable to recover it. 00:36:02.920 [2024-11-18 08:09:55.677987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.678022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.678173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.678219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.678376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.678411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.678574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.678601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 
00:36:02.921 [2024-11-18 08:09:55.678714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.678739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.678889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.678924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.679122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.679156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.679277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.679313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.679459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.679505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 
00:36:02.921 [2024-11-18 08:09:55.679589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.679615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.679728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.679753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.679874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.679899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.680047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.680081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.680223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.680259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 
00:36:02.921 [2024-11-18 08:09:55.680377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.680427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.680611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.680656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.680804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.680843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.680961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.681001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.681213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.681250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 
00:36:02.921 [2024-11-18 08:09:55.681375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.681412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.681571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.681597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.681709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.681736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.681849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.681875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.681992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.682041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 
00:36:02.921 [2024-11-18 08:09:55.682169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.682205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.682402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.682467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.682639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.682664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.682782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.682808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.682972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.683009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 
00:36:02.921 [2024-11-18 08:09:55.683163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.683227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.683394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.683470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.683696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.683722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.683863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.683927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.684115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.684141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 
00:36:02.921 [2024-11-18 08:09:55.684325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.684389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.684635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.684700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.684857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.684923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.921 [2024-11-18 08:09:55.685065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.921 [2024-11-18 08:09:55.685134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.921 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.685298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.685362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 
00:36:02.922 [2024-11-18 08:09:55.685539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.685576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.685721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.685757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.685936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.686000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.686160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.686197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.686356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.686394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 
00:36:02.922 [2024-11-18 08:09:55.686538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.686575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.686729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.686766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.686892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.686930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.687100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.687139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.687338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.687377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 
00:36:02.922 [2024-11-18 08:09:55.687516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.687557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.687722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.687760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.687909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.687947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.688076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.688114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.688313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.688349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 
00:36:02.922 [2024-11-18 08:09:55.688512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.688549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.688697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.688739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.688923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.688959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.689111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.689149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.689327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.689363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 
00:36:02.922 [2024-11-18 08:09:55.689546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.689584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.689693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.689731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.689881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.689920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.690057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.690094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 00:36:02.922 [2024-11-18 08:09:55.690253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.922 [2024-11-18 08:09:55.690292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.922 qpair failed and we were unable to recover it. 
00:36:02.925 [2024-11-18 08:09:55.713296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.925 [2024-11-18 08:09:55.713339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.925 qpair failed and we were unable to recover it. 00:36:02.925 [2024-11-18 08:09:55.713487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.925 [2024-11-18 08:09:55.713540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.713719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.713762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.713908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.713952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.714116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.714159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 
00:36:02.926 [2024-11-18 08:09:55.714330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.714373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.714517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.714563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.714735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.714777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.714982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.715025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.715192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.715235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 
00:36:02.926 [2024-11-18 08:09:55.715534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.715578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.715796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.715839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.715976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.716018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.716187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.716230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.716369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.716412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 
00:36:02.926 [2024-11-18 08:09:55.716602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.716646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.716853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.716896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.717105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.717148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.717353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.717395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.717549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.717593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 
00:36:02.926 [2024-11-18 08:09:55.717735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.717780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.717959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.718001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.718158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.718201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.718421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.718485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.718659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.718702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 
00:36:02.926 [2024-11-18 08:09:55.718877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.718919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.719089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.719133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.719310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.719352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.719508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.719558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.719731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.719774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 
00:36:02.926 [2024-11-18 08:09:55.719949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.719992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.720195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.720239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.720406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.720448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.720675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.720718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.720887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.720930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 
00:36:02.926 [2024-11-18 08:09:55.721058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.721101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.721235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.721277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.721456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.721506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.721674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.926 [2024-11-18 08:09:55.721717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.926 qpair failed and we were unable to recover it. 00:36:02.926 [2024-11-18 08:09:55.721923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.721965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 
00:36:02.927 [2024-11-18 08:09:55.722095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.722138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.722292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.722335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.722523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.722567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.722704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.722749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.722898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.722942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 
00:36:02.927 [2024-11-18 08:09:55.723084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.723127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.723307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.723352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.723555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.723601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.723753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.723799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.723976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.724022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 
00:36:02.927 [2024-11-18 08:09:55.724189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.724235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.724370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.724415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.724583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.724628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.724828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.724873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.725056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.725101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 
00:36:02.927 [2024-11-18 08:09:55.725278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.725323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.725546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.725593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.725771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.725817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.726032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.726076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.726294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.726339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 
00:36:02.927 [2024-11-18 08:09:55.726526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.726573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.726741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.726788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.726967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.727013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.727211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.727245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.727414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.727448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 
00:36:02.927 [2024-11-18 08:09:55.727608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.727642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.727745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.727778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.727923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.727956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.728138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.728191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.728337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.728382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 
00:36:02.927 [2024-11-18 08:09:55.728601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.728647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.728831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.927 [2024-11-18 08:09:55.728876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.927 qpair failed and we were unable to recover it. 00:36:02.927 [2024-11-18 08:09:55.729030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.928 [2024-11-18 08:09:55.729075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.928 qpair failed and we were unable to recover it. 00:36:02.928 [2024-11-18 08:09:55.729288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.928 [2024-11-18 08:09:55.729332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.928 qpair failed and we were unable to recover it. 00:36:02.928 [2024-11-18 08:09:55.729510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.928 [2024-11-18 08:09:55.729556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.928 qpair failed and we were unable to recover it. 
00:36:02.928 [2024-11-18 08:09:55.729708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.928 [2024-11-18 08:09:55.729755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:02.928 qpair failed and we were unable to recover it.
00:36:02.928 [... same connect() failed, errno = 111 / sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triple repeated through 2024-11-18 08:09:55.758806 ...]
00:36:02.931 [2024-11-18 08:09:55.758980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.759031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.759227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.759281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.759485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.759569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.759823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.759880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.760094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.760150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 
00:36:02.931 [2024-11-18 08:09:55.760325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.760382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.760606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.760663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.760932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.760988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.761200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.761251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.761499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.761552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 
00:36:02.931 [2024-11-18 08:09:55.761751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.761802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.762016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.762069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.762301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.762353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.762570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.762624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.762860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.762911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 
00:36:02.931 [2024-11-18 08:09:55.763145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.763197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.763436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.763488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.763725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.763777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.763943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.763996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.764195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.764247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 
00:36:02.931 [2024-11-18 08:09:55.764477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.764546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.764736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.764791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.764973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.765028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.765241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.765297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.765549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.765605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 
00:36:02.931 [2024-11-18 08:09:55.765790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.765851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.766106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.766162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.766421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.766475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.766701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.766758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.767022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.767078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 
00:36:02.931 [2024-11-18 08:09:55.767250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.767307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.767519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.931 [2024-11-18 08:09:55.767573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.931 qpair failed and we were unable to recover it. 00:36:02.931 [2024-11-18 08:09:55.767775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.767829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.768064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.768117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.768277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.768328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 
00:36:02.932 [2024-11-18 08:09:55.768529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.768581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.768855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.768911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.769179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.769234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.769417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.769480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.769754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.769809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 
00:36:02.932 [2024-11-18 08:09:55.770063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.770119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.770382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.770437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.770675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.770731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.770991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.771046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.771273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.771328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 
00:36:02.932 [2024-11-18 08:09:55.771585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.771641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.771894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.771950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.772152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.772206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.772479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.772566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.772779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.772842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 
00:36:02.932 [2024-11-18 08:09:55.773027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.773082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.773308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.773366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.773612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.773671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.773918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.773972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.774155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.774229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 
00:36:02.932 [2024-11-18 08:09:55.774508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.774584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.774810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.774869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.775112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.775166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.775349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.775402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.775645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.775701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 
00:36:02.932 [2024-11-18 08:09:55.775960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.776015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.776221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.776275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.776547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.776607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.776835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.776894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 00:36:02.932 [2024-11-18 08:09:55.777162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.932 [2024-11-18 08:09:55.777220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.932 qpair failed and we were unable to recover it. 
00:36:02.932 [2024-11-18 08:09:55.777442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.777509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.777738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.777793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.778002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.778058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.778304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.778360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.778593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.778650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 
00:36:02.933 [2024-11-18 08:09:55.778905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.778961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.779187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.779242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.779456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.779531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.779791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.779847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.780113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.780177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 
00:36:02.933 [2024-11-18 08:09:55.780427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.780487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.780780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.780841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.781067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.781127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.781398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.781466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 00:36:02.933 [2024-11-18 08:09:55.781730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.933 [2024-11-18 08:09:55.781800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.933 qpair failed and we were unable to recover it. 
00:36:02.936 [2024-11-18 08:09:55.818112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.818188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.818467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.818540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.818828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.818888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.819182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.819258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.819437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.819509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 
00:36:02.936 [2024-11-18 08:09:55.819801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.819879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.820130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.820207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.820436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.820521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.820825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.820912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.821144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.821221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 
00:36:02.936 [2024-11-18 08:09:55.821453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.821530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.821834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.821911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.822095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.822155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.822387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.822447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.822681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.822740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 
00:36:02.936 [2024-11-18 08:09:55.823012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.823074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.823317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.823377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.823642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.823706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.823939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.823999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.824269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.824328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 
00:36:02.936 [2024-11-18 08:09:55.824548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.824609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.824919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.824996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.825289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.825348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.825650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.825728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 00:36:02.936 [2024-11-18 08:09:55.825930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.936 [2024-11-18 08:09:55.826010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.936 qpair failed and we were unable to recover it. 
00:36:02.936 [2024-11-18 08:09:55.826280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.826339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.826581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.826659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.826881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.826960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.827242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.827302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.827528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.827563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 
00:36:02.937 [2024-11-18 08:09:55.827794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.827875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.828098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.828131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.828234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.828268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.828560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.828640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.828854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.828888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 
00:36:02.937 [2024-11-18 08:09:55.829068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.829102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.829289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.829349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.829563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.829643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.829849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.829928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.830133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.830211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 
00:36:02.937 [2024-11-18 08:09:55.830449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.830518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.830755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.830815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.831052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.831131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.831368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.831428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.831745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.831824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 
00:36:02.937 [2024-11-18 08:09:55.832086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.832163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.832393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.832451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.832733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.832820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.833054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.833143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.833338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.833398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 
00:36:02.937 [2024-11-18 08:09:55.833686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.833764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.834022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.834099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.834298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.834357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.834616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.834694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.834964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.835040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 
00:36:02.937 [2024-11-18 08:09:55.835277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.835336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.835635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.835714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.836013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.937 [2024-11-18 08:09:55.836089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.937 qpair failed and we were unable to recover it. 00:36:02.937 [2024-11-18 08:09:55.836312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.836372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.836596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.836674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 
00:36:02.938 [2024-11-18 08:09:55.836951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.837029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.837225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.837284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.837502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.837565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.837836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.837913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.838214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.838291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 
00:36:02.938 [2024-11-18 08:09:55.838597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.838675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.838879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.838956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.839233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.839293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.839485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.839558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.839809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.839885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 
00:36:02.938 [2024-11-18 08:09:55.840188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.840265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.840505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.840566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.840784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.840842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.841036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.841097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.841337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.841397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 
00:36:02.938 [2024-11-18 08:09:55.841668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.841750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.842030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.842090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.842359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.842419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.842730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.842808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 00:36:02.938 [2024-11-18 08:09:55.843106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.938 [2024-11-18 08:09:55.843184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.938 qpair failed and we were unable to recover it. 
00:36:02.941 [2024-11-18 08:09:55.875266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.875327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.875622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.875702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.875929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.876007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.876279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.876338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.876588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.876669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 
00:36:02.941 [2024-11-18 08:09:55.876910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.876997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.877196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.877255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.877502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.877563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.877822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.877899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.878218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.878294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 
00:36:02.941 [2024-11-18 08:09:55.878581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.878672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.878955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.879014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.879255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.879315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.941 qpair failed and we were unable to recover it. 00:36:02.941 [2024-11-18 08:09:55.879572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.941 [2024-11-18 08:09:55.879651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.879918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.879977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 
00:36:02.942 [2024-11-18 08:09:55.880275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.880354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.880607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.880686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.880933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.881010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.881261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.881322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.881537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.881598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 
00:36:02.942 [2024-11-18 08:09:55.881860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.881939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.882125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.882187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.882457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.882528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.882750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.882830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.883082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.883162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 
00:36:02.942 [2024-11-18 08:09:55.883449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.883519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.883734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.883811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.884067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.884144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.884385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.884444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.884709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.884787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 
00:36:02.942 [2024-11-18 08:09:55.885049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.885126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.885351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.885410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.885658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.885739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.886008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.886085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.886270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.886332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 
00:36:02.942 [2024-11-18 08:09:55.886583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.886662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.886909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.886986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.887250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.887309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.887550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.887632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.887921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.887999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 
00:36:02.942 [2024-11-18 08:09:55.888221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.888280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.888547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.888609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.888881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.888958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.889189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.889247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.889452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.889525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 
00:36:02.942 [2024-11-18 08:09:55.889806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.889903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.890125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.890202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.890471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.890543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.890755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.890832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.891091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.891170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 
00:36:02.942 [2024-11-18 08:09:55.891420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.942 [2024-11-18 08:09:55.891479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.942 qpair failed and we were unable to recover it. 00:36:02.942 [2024-11-18 08:09:55.891754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.891830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.892050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.892127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.892332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.892391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.892651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.892729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 
00:36:02.943 [2024-11-18 08:09:55.892948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.893025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.893270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.893328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.893602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.893682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.893885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.893946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.894169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.894228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 
00:36:02.943 [2024-11-18 08:09:55.894507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.894567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.894781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.894858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.895040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.895100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.895291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.895352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.895648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.895728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 
00:36:02.943 [2024-11-18 08:09:55.895991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.896073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.896262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.896322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.896561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.896622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.896855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.896914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.897200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.897260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 
00:36:02.943 [2024-11-18 08:09:55.897465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.897536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.897739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.897798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.898041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.898099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.898333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.898392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.898666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.898743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 
00:36:02.943 [2024-11-18 08:09:55.898996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.899075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.899353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.899414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.899730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.899791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.899993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.900072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 00:36:02.943 [2024-11-18 08:09:55.900293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.943 [2024-11-18 08:09:55.900351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.943 qpair failed and we were unable to recover it. 
00:36:02.946 [2024-11-18 08:09:55.934952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.946 [2024-11-18 08:09:55.935030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.946 qpair failed and we were unable to recover it. 00:36:02.946 [2024-11-18 08:09:55.935294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.946 [2024-11-18 08:09:55.935353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.946 qpair failed and we were unable to recover it. 00:36:02.946 [2024-11-18 08:09:55.935637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.946 [2024-11-18 08:09:55.935699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.946 qpair failed and we were unable to recover it. 00:36:02.946 [2024-11-18 08:09:55.936009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.946 [2024-11-18 08:09:55.936084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.946 qpair failed and we were unable to recover it. 00:36:02.946 [2024-11-18 08:09:55.936318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.946 [2024-11-18 08:09:55.936378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.946 qpair failed and we were unable to recover it. 
00:36:02.946 [2024-11-18 08:09:55.936693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.946 [2024-11-18 08:09:55.936771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.946 qpair failed and we were unable to recover it. 00:36:02.946 [2024-11-18 08:09:55.937058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.946 [2024-11-18 08:09:55.937135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.946 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.937405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.937464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.937760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.937819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.938075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.938153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 
00:36:02.947 [2024-11-18 08:09:55.938424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.938483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.938750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.938826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.939049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.939127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.939393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.939452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.939754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.939838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 
00:36:02.947 [2024-11-18 08:09:55.940048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.940139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.940322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.940382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.940645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.940723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.940930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.941009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.941216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.941274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 
00:36:02.947 [2024-11-18 08:09:55.941520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.941581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.941871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.941949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.942259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.942336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.942618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.942679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.942954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.943015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 
00:36:02.947 [2024-11-18 08:09:55.943290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.943349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.943616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.943694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.943960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.944037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.944317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.944376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.944609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.944687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 
00:36:02.947 [2024-11-18 08:09:55.944903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.944982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.945212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.945274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.945537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.945598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.945895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.945972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.946200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.946258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 
00:36:02.947 [2024-11-18 08:09:55.946484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.946575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.946837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.946913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.947219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.947295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.947517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.947577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.947808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.947885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 
00:36:02.947 [2024-11-18 08:09:55.948122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.948182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.948365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.948424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.948749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.948811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.949039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.947 [2024-11-18 08:09:55.949116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.947 qpair failed and we were unable to recover it. 00:36:02.947 [2024-11-18 08:09:55.949341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.949400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 
00:36:02.948 [2024-11-18 08:09:55.949725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.949804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.950104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.950181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.950415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.950476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.950746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.950822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.951094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.951156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 
00:36:02.948 [2024-11-18 08:09:55.951428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.951487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.951772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.951850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.952145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.952205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.952450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.952523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.952824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.952902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 
00:36:02.948 [2024-11-18 08:09:55.953159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.953251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.953503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.953570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.953835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.953912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.954209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.954286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.954583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.954662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 
00:36:02.948 [2024-11-18 08:09:55.954939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.955000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.955299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.955377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.955679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.955758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.956026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.956103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.956386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.956444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 
00:36:02.948 [2024-11-18 08:09:55.956757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.956855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.957130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.957197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.957540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.957603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.957895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.957959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.958312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.958375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 
00:36:02.948 [2024-11-18 08:09:55.958654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.958715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.958985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.959049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.959244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.959307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.959558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.959617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 00:36:02.948 [2024-11-18 08:09:55.959807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.948 [2024-11-18 08:09:55.959865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:02.948 qpair failed and we were unable to recover it. 
00:36:02.948 [2024-11-18 08:09:55.960079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.948 [2024-11-18 08:09:55.960141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.948 qpair failed and we were unable to recover it.
00:36:02.948 [2024-11-18 08:09:55.960381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.948 [2024-11-18 08:09:55.960442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.948 qpair failed and we were unable to recover it.
00:36:02.948 [2024-11-18 08:09:55.960740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.948 [2024-11-18 08:09:55.960799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.948 qpair failed and we were unable to recover it.
00:36:02.948 [2024-11-18 08:09:55.961099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.948 [2024-11-18 08:09:55.961162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.948 qpair failed and we were unable to recover it.
00:36:02.948 [2024-11-18 08:09:55.961424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.948 [2024-11-18 08:09:55.961486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.948 qpair failed and we were unable to recover it.
00:36:02.948 [2024-11-18 08:09:55.961798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.948 [2024-11-18 08:09:55.961861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.948 qpair failed and we were unable to recover it.
00:36:02.948 [2024-11-18 08:09:55.962142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.948 [2024-11-18 08:09:55.962204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.948 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.962470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.962582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.962881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.962944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.963209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.963272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.963575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.963634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.963939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.964002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.964252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.964316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.964585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.964643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.964915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.964974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.965196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.965259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.965427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.965503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.965727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.965802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.966058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.966117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.966367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.966429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.966746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.966824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.967089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.967155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.967424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.967487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.967751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.967836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.968122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.968184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.968426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.968512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.968857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.968920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.969168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.969230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.969473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.969568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.969765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.969824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.970130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.970193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.970453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.970533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.970705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.970768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.971060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.971122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.971408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.971481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.971726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.971790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.972050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.972112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.972414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.972476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.972790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.972852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.973056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.973118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.973340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.973403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.973671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.973736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.973985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.974048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.974333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.974396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.949 [2024-11-18 08:09:55.974717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.949 [2024-11-18 08:09:55.974782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.949 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.974976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.975039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.975232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.975294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.975557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.975624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.975944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.976008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.976252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.976315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.976531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.976596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.976911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.976975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.977238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.977301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.977523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.977587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.977797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.977862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.978119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.978182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.978416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.978479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.978782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.978846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.979155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.979219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.979481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.979565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.979827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.979890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.980177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.980239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.980565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.980630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.980886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.980949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.981161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.981223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:02.950 [2024-11-18 08:09:55.981465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.950 [2024-11-18 08:09:55.981544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:02.950 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.981790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.981853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.982103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.982168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.982413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.982476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.982779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.982845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.983094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.983157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.983399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.983461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.983691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.983756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.983991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.984053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.984309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.984372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.984615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.984681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.984941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.985004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.985257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.985319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.985534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.985599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.985897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.985959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.986260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.986323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.986628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.986694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.986943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.987006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.987272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.987335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.987554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.987619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.987874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.987937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.988195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.988258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.988554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.988618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.988860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.988925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.226 qpair failed and we were unable to recover it.
00:36:03.226 [2024-11-18 08:09:55.989216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.226 [2024-11-18 08:09:55.989280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.989476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.989560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.989766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.989829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.990074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.990137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.990376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.990441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.990705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.990771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.991062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.991126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.991388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.991450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.991762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.991826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.992064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.992129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.992378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.992440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.992725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.992791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.993053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.993115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.993413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.993486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.993767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.993832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.994088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.994151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.994442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.994524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.994793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.994859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.995105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.995168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.995406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.995469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.995717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.995781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.996079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.996142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.996349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.996412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.996724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.996789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.997078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.997141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.997381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.227 [2024-11-18 08:09:55.997443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.227 qpair failed and we were unable to recover it.
00:36:03.227 [2024-11-18 08:09:55.997652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:55.997717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:55.997956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:55.998020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:55.998314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:55.998378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:55.998597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:55.998661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:55.998951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:55.999014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 
00:36:03.227 [2024-11-18 08:09:55.999213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:55.999276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:55.999484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:55.999565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:55.999789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:55.999852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:56.000110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:56.000174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:56.000470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:56.000588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 
00:36:03.227 [2024-11-18 08:09:56.000881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:56.000945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:56.001210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.227 [2024-11-18 08:09:56.001272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.227 qpair failed and we were unable to recover it. 00:36:03.227 [2024-11-18 08:09:56.001534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.001599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.001909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.001972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.002259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.002331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 
00:36:03.228 [2024-11-18 08:09:56.002578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.002643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.002879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.002942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.003236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.003298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.003543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.003609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.003790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.003853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 
00:36:03.228 [2024-11-18 08:09:56.004135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.004197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.004440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.004517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.004723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.004786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.005078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.005140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.005378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.005443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 
00:36:03.228 [2024-11-18 08:09:56.005720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.005785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.005986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.006048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.006227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.006292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.006552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.006617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.006875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.006940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 
00:36:03.228 [2024-11-18 08:09:56.007167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.007231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.007453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.007531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.007793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.007855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.008094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.008160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.008367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.008429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 
00:36:03.228 [2024-11-18 08:09:56.008718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.008783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.009026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.009091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.009354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.009417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.009705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.009770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.010012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.010074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 
00:36:03.228 [2024-11-18 08:09:56.010278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.010343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.010639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.010706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.010976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.011039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.011293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.011356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.011582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.011647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 
00:36:03.228 [2024-11-18 08:09:56.011906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.011968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.012231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.012293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.012585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.012650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.012873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.012937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 00:36:03.228 [2024-11-18 08:09:56.013166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.228 [2024-11-18 08:09:56.013229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.228 qpair failed and we were unable to recover it. 
00:36:03.228 [2024-11-18 08:09:56.013520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.013584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.013844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.013907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.014150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.014213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.014458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.014543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.014807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.014870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 
00:36:03.229 [2024-11-18 08:09:56.015077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.015141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.015430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.015512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.015751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.015814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.016004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.016067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.016320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.016383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 
00:36:03.229 [2024-11-18 08:09:56.016686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.016751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.016985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.017048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.017335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.017397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.017663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.017729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.017971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.018035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 
00:36:03.229 [2024-11-18 08:09:56.018243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.018304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.018546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.018610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.018839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.018904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.019115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.019178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.019391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.019454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 
00:36:03.229 [2024-11-18 08:09:56.019697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.019762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.020004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.020066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.020350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.020412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.020669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.020734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 00:36:03.229 [2024-11-18 08:09:56.020962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.229 [2024-11-18 08:09:56.021024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.229 qpair failed and we were unable to recover it. 
00:36:03.229 [2024-11-18 08:09:56.021266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.021328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.021619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.021684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.021939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.022003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.022242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.022305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.022556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.022620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.022912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.022975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.023181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.023247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.023514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.023588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.023796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.023862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.024164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.024228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.024478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.024574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.024859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.024922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.025214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.229 [2024-11-18 08:09:56.025277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.229 qpair failed and we were unable to recover it.
00:36:03.229 [2024-11-18 08:09:56.025537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.025602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.025887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.025951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.026220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.026282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.026528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.026594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.026849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.026912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.027108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.027171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.027398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.027461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.027725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.027789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.028046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.028109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.028406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.028469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.028784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.028848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.029094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.029156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.029374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.029437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.029693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.029758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.030048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.030111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.030316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.030382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.030696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.030762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.031015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.031079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.031299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.031362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.031652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.031716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.031931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.031997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.032297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.032371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.032674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.032739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.033049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.033112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.033350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.033414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.033712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.033776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.033972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.034037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.034297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.034359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.034587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.034652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.034936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.035000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.035172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.035235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.035481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.035567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.035801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.035865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.036151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.230 [2024-11-18 08:09:56.036214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.230 qpair failed and we were unable to recover it.
00:36:03.230 [2024-11-18 08:09:56.036461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.036563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.036822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.036887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.037139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.037201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.037455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.037539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.037850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.037914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.038210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.038273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.038564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.038629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.038834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.038896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.039183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.039246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.039468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.039549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.039761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.039824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.040080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.040144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.040359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.040422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.040728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.040792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.041085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.041159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.041427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.041506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.041754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.041817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.042106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.042169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.042454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.042554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.042848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.042912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.043136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.043199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.043408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.043474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.043758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.043822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.044091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.044155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.044390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.044453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.044717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.044781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.045005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.045068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.045315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.045377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.045663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.045729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.045937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.046001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.046292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.046355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.046579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.046645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.046897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.046962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.047190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.047252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.047548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.231 [2024-11-18 08:09:56.047612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.231 qpair failed and we were unable to recover it.
00:36:03.231 [2024-11-18 08:09:56.047871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.047934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.048185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.048247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.048511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.048577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.048863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.048927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.049231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.049293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.049537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.049603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.049844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.049908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.050162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.050225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.050545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.050610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.050901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.050964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.051211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.051274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.051563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.051628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.051858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.051921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.052173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.052237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.052506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.052570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.052871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.052934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.053198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.053261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.053561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.053625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.053920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.053984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.054230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.054294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.054517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.054581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.054870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.054932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.055213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.055276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.055488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.055570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.055849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.055912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.056162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.056225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.056423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.056486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.056755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.056821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.057065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.057128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.057377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.057442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.057685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.057750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.058037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.058100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.058371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.232 [2024-11-18 08:09:56.058435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.232 qpair failed and we were unable to recover it.
00:36:03.232 [2024-11-18 08:09:56.058739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.232 [2024-11-18 08:09:56.058804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.232 qpair failed and we were unable to recover it. 00:36:03.232 [2024-11-18 08:09:56.059117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.232 [2024-11-18 08:09:56.059180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.232 qpair failed and we were unable to recover it. 00:36:03.232 [2024-11-18 08:09:56.059376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.232 [2024-11-18 08:09:56.059440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.232 qpair failed and we were unable to recover it. 00:36:03.232 [2024-11-18 08:09:56.059671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.232 [2024-11-18 08:09:56.059735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.232 qpair failed and we were unable to recover it. 00:36:03.232 [2024-11-18 08:09:56.059983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.232 [2024-11-18 08:09:56.060045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.232 qpair failed and we were unable to recover it. 
00:36:03.232 [2024-11-18 08:09:56.060314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.060377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.060688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.060753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.061057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.061119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.061326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.061389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.061606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.061671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 
00:36:03.233 [2024-11-18 08:09:56.061926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.061989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.062240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.062303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.062544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.062610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.062917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.062979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.063233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.063306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 
00:36:03.233 [2024-11-18 08:09:56.063517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.063585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.063847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.063910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.064159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.064222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.064477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.064559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.064823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.064886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 
00:36:03.233 [2024-11-18 08:09:56.065132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.065195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.065434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.065512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.065775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.065839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.066079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.066141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.066434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.066529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 
00:36:03.233 [2024-11-18 08:09:56.066805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.066868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.067162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.067224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.067434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.067516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.067822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.067885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.068184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.068246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 
00:36:03.233 [2024-11-18 08:09:56.068486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.068568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.068812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.068875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.069167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.069229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.069430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.069510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.069755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.069818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 
00:36:03.233 [2024-11-18 08:09:56.070108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.070170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.070468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.070569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.070807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.070871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.071121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.071184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.071486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.071568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 
00:36:03.233 [2024-11-18 08:09:56.071840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.071904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.072111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.072185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.072379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.233 [2024-11-18 08:09:56.072441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.233 qpair failed and we were unable to recover it. 00:36:03.233 [2024-11-18 08:09:56.072700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.072764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.073002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.073066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 
00:36:03.234 [2024-11-18 08:09:56.073310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.073373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.073575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.073640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.073900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.073962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.074249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.074312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.074598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.074663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 
00:36:03.234 [2024-11-18 08:09:56.074912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.074974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.075256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.075318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.075569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.075634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.075845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.075908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.076191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.076254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 
00:36:03.234 [2024-11-18 08:09:56.076519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.076585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.076883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.076946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.077190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.077253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.077543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.077609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.077824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.077889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 
00:36:03.234 [2024-11-18 08:09:56.078135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.078198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.078459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.078556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.078847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.078910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.079115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.079178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.079419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.079482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 
00:36:03.234 [2024-11-18 08:09:56.079701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.079763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.079972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.080035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.080272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.080337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.080594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.080660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.080933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.080997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 
00:36:03.234 [2024-11-18 08:09:56.081204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.081266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.081557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.081622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.081841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.081904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.082191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.082253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.082543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.082608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 
00:36:03.234 [2024-11-18 08:09:56.082911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.082974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.083220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.083282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.083523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.083588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.083782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.083845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 00:36:03.234 [2024-11-18 08:09:56.084049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.234 [2024-11-18 08:09:56.084111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.234 qpair failed and we were unable to recover it. 
00:36:03.237 [2024-11-18 08:09:56.115884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.237 [2024-11-18 08:09:56.115931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.237 qpair failed and we were unable to recover it. 00:36:03.237 [2024-11-18 08:09:56.116127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.237 [2024-11-18 08:09:56.116174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.237 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.116355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.116402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.116576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.116627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.116799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.116875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 
00:36:03.238 [2024-11-18 08:09:56.117173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.117245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.117544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.117592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.117791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.117838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.118064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.118126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.118421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.118484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 
00:36:03.238 [2024-11-18 08:09:56.118691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.118739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.118935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.118983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.119127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.119203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.119392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.119456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.119784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.119864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 
00:36:03.238 [2024-11-18 08:09:56.120069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.120135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.120369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.120433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.120692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.120740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.120990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.121053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.121322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.121385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 
00:36:03.238 [2024-11-18 08:09:56.121641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.121707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.121980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.122027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.122255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.122318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.122627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.122693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.122950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.123015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 
00:36:03.238 [2024-11-18 08:09:56.123266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.123329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.123587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.123651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.123958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.124006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.124165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.124212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.124409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.124456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 
00:36:03.238 [2024-11-18 08:09:56.124648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.124696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.124891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.124938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.125167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.125213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.125371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.125417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.125581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.125630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 
00:36:03.238 [2024-11-18 08:09:56.125814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.125861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.126025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.126073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.126272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.126320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.126485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.126570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 00:36:03.238 [2024-11-18 08:09:56.126909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.238 [2024-11-18 08:09:56.126999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.238 qpair failed and we were unable to recover it. 
00:36:03.239 [2024-11-18 08:09:56.127355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.127445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.127787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.127868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.128146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.128214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.128518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.128586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.128805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.128868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 
00:36:03.239 [2024-11-18 08:09:56.129095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.129159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.129401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.129467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.129760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.129849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.130199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.130286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.130590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.130658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 
00:36:03.239 [2024-11-18 08:09:56.130980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.131068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.131369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.131457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.131838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.131926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.132239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.132306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.132612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.132682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 
00:36:03.239 [2024-11-18 08:09:56.132985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.133051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.133316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.133396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.133772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.133863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.134189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.134278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.134617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.134687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 
00:36:03.239 [2024-11-18 08:09:56.134929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.134994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.135188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.135248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.135548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.135616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.135865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.135928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.136213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.136276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 
00:36:03.239 [2024-11-18 08:09:56.136617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.136708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.136997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.137064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.137272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.137338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.137662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.137742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.138033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.138098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 
00:36:03.239 [2024-11-18 08:09:56.138405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.138517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.138864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.138935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.139244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.139306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.139566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.139634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 00:36:03.239 [2024-11-18 08:09:56.139886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.139950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 
00:36:03.239 [2024-11-18 08:09:56.140236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.239 [2024-11-18 08:09:56.140283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.239 qpair failed and we were unable to recover it. 
00:36:03.243 (above message group repeated ~114 more times for tqpair=0x160f690, addr=10.0.0.2, port=4420, from 08:09:56.140 through 08:09:56.184) 
00:36:03.243 [2024-11-18 08:09:56.184861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.184936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.185199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.185261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.185548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.185637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.185957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.186045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.186390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.186478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 
00:36:03.243 [2024-11-18 08:09:56.186855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.186944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.187300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.187388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.187777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.187866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.188207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.188275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.188537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.188618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 
00:36:03.243 [2024-11-18 08:09:56.188920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.188984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.189234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.189297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.189547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.189615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.189952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.190040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.190406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.190511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 
00:36:03.243 [2024-11-18 08:09:56.190903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.190993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.191352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.191440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.191782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.191870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.192159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.192248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.192487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.192586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 
00:36:03.243 [2024-11-18 08:09:56.192861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.192926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.193142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.193205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.193516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.193583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.193837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.193905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.194254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.194341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 
00:36:03.243 [2024-11-18 08:09:56.194720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.194808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.195104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.195180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.195532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.195619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.195939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.196028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.196343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.196436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 
00:36:03.243 [2024-11-18 08:09:56.196778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.196848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.197090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.197154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.197370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.197433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.197756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.197821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.243 [2024-11-18 08:09:56.198020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.198083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 
00:36:03.243 [2024-11-18 08:09:56.198345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.243 [2024-11-18 08:09:56.198431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.243 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.198795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.198884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.199230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.199317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.199624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.199714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.200030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.200117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 
00:36:03.244 [2024-11-18 08:09:56.200508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.200598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.200991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.201091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.201405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.201474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.201765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.201831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.202080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.202144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 
00:36:03.244 [2024-11-18 08:09:56.202395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.202460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.202765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.202829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.203075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.203141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.203393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.203459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.203731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.203796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 
00:36:03.244 [2024-11-18 08:09:56.204045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.204110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.204367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.204430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.204703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.204771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.205055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.205120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.205415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.205478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 
00:36:03.244 [2024-11-18 08:09:56.205787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.205852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.206097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.206165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.206409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.206474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.206793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.206858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.207112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.207176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 
00:36:03.244 [2024-11-18 08:09:56.207419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.207483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.207757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.207825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.208050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.208112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.208358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.208421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.208665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.208731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 
00:36:03.244 [2024-11-18 08:09:56.209034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.209098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.209312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.209376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.209741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.209807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.210068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.244 [2024-11-18 08:09:56.210134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.244 qpair failed and we were unable to recover it. 00:36:03.244 [2024-11-18 08:09:56.210375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.210441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 
00:36:03.245 [2024-11-18 08:09:56.210725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.210791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 00:36:03.245 [2024-11-18 08:09:56.211050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.211115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 00:36:03.245 [2024-11-18 08:09:56.211388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.211455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 00:36:03.245 [2024-11-18 08:09:56.211774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.211840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 00:36:03.245 [2024-11-18 08:09:56.212106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.212170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 
00:36:03.245 [2024-11-18 08:09:56.212382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.212446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 00:36:03.245 [2024-11-18 08:09:56.212684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.212749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 00:36:03.245 [2024-11-18 08:09:56.212954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.213017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 00:36:03.245 [2024-11-18 08:09:56.213257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.213321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 00:36:03.245 [2024-11-18 08:09:56.213531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.245 [2024-11-18 08:09:56.213598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.245 qpair failed and we were unable to recover it. 
00:36:03.248 [2024-11-18 08:09:56.245748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.245807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.246068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.246127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.246331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.246391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.246649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.246711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.246948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.247008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 
00:36:03.248 [2024-11-18 08:09:56.247275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.247335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.247567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.247628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.247892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.247951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.248225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.248285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.248521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.248582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 
00:36:03.248 [2024-11-18 08:09:56.248793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.248852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.249048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.249109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.249339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.249399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.249727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.249788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.250110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.250170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 
00:36:03.248 [2024-11-18 08:09:56.250479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.250554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.250800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.250871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.251153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.251213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.251541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.251602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.251840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.251900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 
00:36:03.248 [2024-11-18 08:09:56.252168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.252228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.252512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.252573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.252847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.252906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.253211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.253271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.253546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.253606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 
00:36:03.248 [2024-11-18 08:09:56.253835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.253894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.254170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.254229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.254510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.254572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.254761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.254824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.255066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.255125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 
00:36:03.248 [2024-11-18 08:09:56.255432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.255523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.255760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.255819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.256047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.256107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.256378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.256437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 00:36:03.248 [2024-11-18 08:09:56.256644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.248 [2024-11-18 08:09:56.256705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.248 qpair failed and we were unable to recover it. 
00:36:03.248 [2024-11-18 08:09:56.256931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.256991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.257234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.257296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.257580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.257640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.257880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.257939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.258217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.258277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 
00:36:03.249 [2024-11-18 08:09:56.258516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.258576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.258854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.258913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.259100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.259159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.259412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.259471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.259763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.259823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 
00:36:03.249 [2024-11-18 08:09:56.260091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.260150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.260422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.260480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.260736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.260796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.260980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.261042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.261272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.261331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 
00:36:03.249 [2024-11-18 08:09:56.261647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.261709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.262027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.262086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.262359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.262418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.262721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.262781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.263044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.263104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 
00:36:03.249 [2024-11-18 08:09:56.263299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.263360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.263634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.263712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.264013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.264073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.264305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.264364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.264569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.264631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 
00:36:03.249 [2024-11-18 08:09:56.264859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.264920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.265230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.265290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.265534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.265596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.265875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.265934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.266248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.266307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 
00:36:03.249 [2024-11-18 08:09:56.266589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.266650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.266884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.266943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.267221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.249 [2024-11-18 08:09:56.267280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.249 qpair failed and we were unable to recover it. 00:36:03.249 [2024-11-18 08:09:56.267521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.267581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.267810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.267870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 
00:36:03.250 [2024-11-18 08:09:56.268190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.268249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.268519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.268579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.268848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.268907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.269218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.269278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.269599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.269659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 
00:36:03.250 [2024-11-18 08:09:56.269983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.270042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.270316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.270376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.270652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.270711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.271046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.271108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.271376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.271435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 
00:36:03.250 [2024-11-18 08:09:56.271724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.271784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.272017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.272076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.272358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.272417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.272728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.272787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 00:36:03.250 [2024-11-18 08:09:56.273018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.250 [2024-11-18 08:09:56.273077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.250 qpair failed and we were unable to recover it. 
00:36:03.527 [2024-11-18 08:09:56.309325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.309389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.309654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.309719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.309982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.310046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.310290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.310354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.310620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.310686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 
00:36:03.527 [2024-11-18 08:09:56.310937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.311001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.311241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.311306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.311600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.311666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.311878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.311943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.312186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.312252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 
00:36:03.527 [2024-11-18 08:09:56.312519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.312585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.312849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.312913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.313217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.313280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.313563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.313629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.313922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.313986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 
00:36:03.527 [2024-11-18 08:09:56.314281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.314345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.314648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.314713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.315017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.315081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.315325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.315389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.315666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.315734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 
00:36:03.527 [2024-11-18 08:09:56.315939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.316003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.316309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.316382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.316642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.316709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.317004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.317068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.317338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.317401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 
00:36:03.527 [2024-11-18 08:09:56.317611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.317677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.317963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.318028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.318293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.318356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.318604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.318668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.318907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.318970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 
00:36:03.527 [2024-11-18 08:09:56.319261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.319324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.319592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.319658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.319908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.319973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.320206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.527 [2024-11-18 08:09:56.320271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.527 qpair failed and we were unable to recover it. 00:36:03.527 [2024-11-18 08:09:56.320561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.320627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 
00:36:03.528 [2024-11-18 08:09:56.320889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.320954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.321218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.321282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.321588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.321653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.321947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.322011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.322312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.322376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 
00:36:03.528 [2024-11-18 08:09:56.322621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.322687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.322892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.322960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.323264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.323328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.323629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.323695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.323950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.324016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 
00:36:03.528 [2024-11-18 08:09:56.324280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.324344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.324648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.324713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.325015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.325080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.325379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.325444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.325765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.325829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 
00:36:03.528 [2024-11-18 08:09:56.326075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.326139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.326387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.326452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.326716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.326777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.327076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.327137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.327431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.327520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 
00:36:03.528 [2024-11-18 08:09:56.327712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.327772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.328004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.328064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.328333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.328394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.328651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.328711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.328987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.329047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 
00:36:03.528 [2024-11-18 08:09:56.329277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.329337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.329531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.329608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.329844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.329904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.330142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.330202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.330484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.330559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 
00:36:03.528 [2024-11-18 08:09:56.330830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.330890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.331090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.331152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.331394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.331456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.331707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.331767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.332040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.332098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 
00:36:03.528 [2024-11-18 08:09:56.332331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.528 [2024-11-18 08:09:56.332390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.528 qpair failed and we were unable to recover it. 00:36:03.528 [2024-11-18 08:09:56.332678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.529 [2024-11-18 08:09:56.332739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.529 qpair failed and we were unable to recover it. 00:36:03.529 [2024-11-18 08:09:56.333002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.529 [2024-11-18 08:09:56.333062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.529 qpair failed and we were unable to recover it. 00:36:03.529 [2024-11-18 08:09:56.333250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.529 [2024-11-18 08:09:56.333310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.529 qpair failed and we were unable to recover it. 00:36:03.529 [2024-11-18 08:09:56.333538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.529 [2024-11-18 08:09:56.333599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.529 qpair failed and we were unable to recover it. 
00:36:03.529 [2024-11-18 08:09:56.333856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.529 [2024-11-18 08:09:56.333917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.529 qpair failed and we were unable to recover it.
[Identical connect() failures (errno = 111, ECONNREFUSED) and unrecoverable qpair errors for tqpair=0x7f7b00000b90 (addr=10.0.0.2, port=4420) repeat continuously for each reconnect attempt from 08:09:56.334 through 08:09:56.369; repeats elided.]
00:36:03.532 [2024-11-18 08:09:56.369858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.369923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.370224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.370288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.370593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.370660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.370909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.370974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.371217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.371281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 
00:36:03.532 [2024-11-18 08:09:56.371582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.371648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.371860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.371927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.372194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.372259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.372516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.372582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.372840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.372905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 
00:36:03.532 [2024-11-18 08:09:56.373191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.373255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.373548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.373615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.373859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.373923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.374216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.374280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.374527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.374594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 
00:36:03.532 [2024-11-18 08:09:56.374816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.374881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.375102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.375171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.375458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.375551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.375846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.375920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.376183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.376248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 
00:36:03.532 [2024-11-18 08:09:56.376511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.376581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.376801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.376866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.377057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.377122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.377364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.377429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.377709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.377773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 
00:36:03.532 [2024-11-18 08:09:56.378037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.532 [2024-11-18 08:09:56.378101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.532 qpair failed and we were unable to recover it. 00:36:03.532 [2024-11-18 08:09:56.378316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.378381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.378700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.378767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.379019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.379083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.379341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.379405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 
00:36:03.533 [2024-11-18 08:09:56.379731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.379797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.380047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.380111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.380335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.380399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.380714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.380780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.381034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.381098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 
00:36:03.533 [2024-11-18 08:09:56.381386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.381449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.381709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.381775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.382032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.382097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.382341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.382408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.382623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.382690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 
00:36:03.533 [2024-11-18 08:09:56.382887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.382951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.383149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.383215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.383424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.383488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.383764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.383828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.384018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.384082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 
00:36:03.533 [2024-11-18 08:09:56.384384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.384450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.384752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.384817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.385118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.385182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.385396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.385463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.385746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.385811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 
00:36:03.533 [2024-11-18 08:09:56.386056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.386120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.386411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.386475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.386734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.386798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.387044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.387108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.387389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.387454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 
00:36:03.533 [2024-11-18 08:09:56.387765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.387829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.388080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.388147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.388441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.388522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.388729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.388803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.389111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.389176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 
00:36:03.533 [2024-11-18 08:09:56.389417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.389480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.389750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.389814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.390071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.390134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.533 [2024-11-18 08:09:56.390419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.533 [2024-11-18 08:09:56.390483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.533 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.390757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.390822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 
00:36:03.534 [2024-11-18 08:09:56.391080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.391146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.391341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.391405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.391725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.391791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.391999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.392062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.392350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.392413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 
00:36:03.534 [2024-11-18 08:09:56.392675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.392742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.392998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.393063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.393281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.393345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.393558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.393626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 00:36:03.534 [2024-11-18 08:09:56.393874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.534 [2024-11-18 08:09:56.393939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.534 qpair failed and we were unable to recover it. 
00:36:03.534 [2024-11-18 08:09:56.394134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.534 [2024-11-18 08:09:56.394196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.534 qpair failed and we were unable to recover it.
00:36:03.534 [... the same three-line sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously, timestamps [2024-11-18 08:09:56.394486] through [2024-11-18 08:09:56.430730] ...]
00:36:03.537 [2024-11-18 08:09:56.430954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.431018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.431304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.431368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.431592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.431659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.431960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.432025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.432326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.432390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 
00:36:03.537 [2024-11-18 08:09:56.432655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.432720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.432972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.433036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.433331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.433395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.433662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.433727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.433928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.433993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 
00:36:03.537 [2024-11-18 08:09:56.434221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.434286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.434530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.434596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.434808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.434875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.435142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.435206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.435465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.435565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 
00:36:03.537 [2024-11-18 08:09:56.435867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.435931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.436142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.436205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.436447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.436529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.436827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.436892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 00:36:03.537 [2024-11-18 08:09:56.437098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.537 [2024-11-18 08:09:56.437162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.537 qpair failed and we were unable to recover it. 
00:36:03.538 [2024-11-18 08:09:56.437418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.437481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.437785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.437849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.438096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.438162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.438350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.438414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.438658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.438723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 
00:36:03.538 [2024-11-18 08:09:56.439013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.439077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.439375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.439439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.439750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.439815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.440129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.440203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.440465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.440544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 
00:36:03.538 [2024-11-18 08:09:56.440838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.440902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.441107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.441171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.441461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.441539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.441788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.441852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.442152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.442216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 
00:36:03.538 [2024-11-18 08:09:56.442467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.442548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.442841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.442905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.443104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.443169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.443460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.443550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.443747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.443810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 
00:36:03.538 [2024-11-18 08:09:56.444063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.444127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.444371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.444435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.444654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.444720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.444986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.445050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.445317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.445381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 
00:36:03.538 [2024-11-18 08:09:56.445680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.445746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.446043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.446106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.446414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.446477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.446744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.446808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.447017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.447084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 
00:36:03.538 [2024-11-18 08:09:56.447376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.447440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.447797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161d630 is same with the state(6) to be set 00:36:03.538 [2024-11-18 08:09:56.448236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.448334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.448651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.448725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.448959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.449025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.449288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.449368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 
00:36:03.538 [2024-11-18 08:09:56.449623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.449690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.538 [2024-11-18 08:09:56.449978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.538 [2024-11-18 08:09:56.450045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.538 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.450294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.450358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.450609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.450676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.450933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.450998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 
00:36:03.539 [2024-11-18 08:09:56.451288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.451351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.451638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.451706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.451903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.451970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.452222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.452286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.452576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.452642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 
00:36:03.539 [2024-11-18 08:09:56.452895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.452964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.453160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.453227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.453522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.453588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.453847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.453913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.454148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.454212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 
00:36:03.539 [2024-11-18 08:09:56.454444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.454534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.454804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.454869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.455124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.455188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.455451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.455535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.455786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.455851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 
00:36:03.539 [2024-11-18 08:09:56.456106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.456171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.456473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.456552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.456838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.456902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.457117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.457181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.457375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.457439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 
00:36:03.539 [2024-11-18 08:09:56.457671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.457735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.458026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.458103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.458353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.458419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.458734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.458800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.459044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.459109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 
00:36:03.539 [2024-11-18 08:09:56.459403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.459468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.459771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.459838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.460144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.460209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.460511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.460577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.460792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.460861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 
00:36:03.539 [2024-11-18 08:09:56.461115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.461181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.461479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.461562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.461768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.461835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.462091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.462156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.539 qpair failed and we were unable to recover it. 00:36:03.539 [2024-11-18 08:09:56.462413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.539 [2024-11-18 08:09:56.462477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 
00:36:03.540 [2024-11-18 08:09:56.462799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.462864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.463108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.463173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.463485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.463565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.463816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.463880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.464148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.464213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 
00:36:03.540 [2024-11-18 08:09:56.464458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.464538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.464793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.464857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.465156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.465222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.465467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.465549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.465841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.465905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 
00:36:03.540 [2024-11-18 08:09:56.466171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.466235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.466506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.466575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.466804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.466872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.467188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.467255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.467562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.467628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 
00:36:03.540 [2024-11-18 08:09:56.467879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.467945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.468195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.468261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.468521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.468587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.468871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.468936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.469222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.469287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 
00:36:03.540 [2024-11-18 08:09:56.469541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.469607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.469859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.469924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.470218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.470284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.470542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.470608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.470891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.470956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 
00:36:03.540 [2024-11-18 08:09:56.471163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.471228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.471540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.471618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.471859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.471926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.472229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.472294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.472588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.472656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 
00:36:03.540 [2024-11-18 08:09:56.472910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.472975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.473227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.473292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.540 [2024-11-18 08:09:56.473566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.540 [2024-11-18 08:09:56.473632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.540 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.473830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.473894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.474151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.474215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 
00:36:03.541 [2024-11-18 08:09:56.474459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.474537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.474748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.474813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.475102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.475165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.475388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.475452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.475712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.475777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 
00:36:03.541 [2024-11-18 08:09:56.475993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.476058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.476349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.476413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.476655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.476724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.476971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.477037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.477294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.477359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 
00:36:03.541 [2024-11-18 08:09:56.477650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.477717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.478018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.478083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.478330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.478394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.478659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.478725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.478964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.479031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 
00:36:03.541 [2024-11-18 08:09:56.479280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.479344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.479554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.479624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.479886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.479952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.480166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.480234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.480517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.480583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 
00:36:03.541 [2024-11-18 08:09:56.480833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.480901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.481147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.481214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.481413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.481480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.481803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.481868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.482171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.482235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 
00:36:03.541 [2024-11-18 08:09:56.482532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.482599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.482889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.482954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.483237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.483303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.483553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.483622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.483835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.483900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 
00:36:03.541 [2024-11-18 08:09:56.484161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.484225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.484409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.484487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.484785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.484850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.485060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.485124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 00:36:03.541 [2024-11-18 08:09:56.485327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.485393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.541 qpair failed and we were unable to recover it. 
00:36:03.541 [2024-11-18 08:09:56.485681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.541 [2024-11-18 08:09:56.485748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.485964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.486029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.486317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.486382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.486643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.486711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.487012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.487078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 
00:36:03.542 [2024-11-18 08:09:56.487336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.487400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.487665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.487731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.487990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.488055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.488334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.488399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.488607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.488673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 
00:36:03.542 [2024-11-18 08:09:56.488973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.489038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.489244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.489308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.489616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.489682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.489980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.490046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 00:36:03.542 [2024-11-18 08:09:56.490232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.542 [2024-11-18 08:09:56.490296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.542 qpair failed and we were unable to recover it. 
00:36:03.542 [2024-11-18 08:09:56.490589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.490655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.490871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.490912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.491075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.491116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.491375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.491440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.491673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.491714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.491895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.491958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.492215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.492279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.492569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.492611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.492760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.492834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.493132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.493197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.493443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.493543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.493685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.493726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.493986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.494051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.494300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.494371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.494644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.494686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.494873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.494937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.495156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.495224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.495537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.495581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.495749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.495791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.496046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.496111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.496369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.496433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.496678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.496727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.496915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.542 [2024-11-18 08:09:56.496980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.542 qpair failed and we were unable to recover it.
00:36:03.542 [2024-11-18 08:09:56.497210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.497277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.497530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.497572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.497710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.497752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.497920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.497961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.498271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.498312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.498432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.498472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.498706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.498775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.499039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.499104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.499359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.499423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.499686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.499752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.500050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.500115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.500362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.500426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.500695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.500763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.501034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.501100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.501370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.501435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.501729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.501797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.502019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.502086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.502379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.502445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.502777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.502844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.503103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.503144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.503309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.503351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.503549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.503620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.503837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.503901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.504101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.504168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.504409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.504475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.504795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.504860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.505119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.505186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.505433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.505518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.505819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.505860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.506020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.506061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.506220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.506262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.506566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.506633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.506930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.506996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.507247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.507316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.507611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.507677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.507927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.507992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.508294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.508360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.543 [2024-11-18 08:09:56.508718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.543 [2024-11-18 08:09:56.508760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.543 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.508916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.509000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.509196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.509261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.509557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.509624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.509880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.509945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.510129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.510194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.510438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.510517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.510812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.510878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.511145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.511211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.511417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.511485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.511766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.511831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.512080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.512144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.512395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.512460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.512722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.512788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.512994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.513059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.513314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.513379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.513665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.513731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.514016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.514080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.514280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.514340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.514480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.514543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.514715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.514798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.515105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.515170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.515423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.515487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.515749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.515814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.516070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.516136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.516436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.516518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.516735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.516802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.517047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.517111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.517380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.517446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.517677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.517742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.517984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.518049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.518300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.518367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.518661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.518728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.518990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.544 [2024-11-18 08:09:56.519055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.544 qpair failed and we were unable to recover it.
00:36:03.544 [2024-11-18 08:09:56.519251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.519316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.519569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.519635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.519857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.519921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.520177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.520242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.520438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.520518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.520728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.520795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.521088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.521154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.521394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.521470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.521707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.521774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.522038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.522103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.522346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.522412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.522719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.522788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.523043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.523110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.523363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.523428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.523701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.523771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.523977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.524044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.524289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.524354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.524647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.524714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.524964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.525027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.525326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.525390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.525703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.525771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.526048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.526114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.526339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.526405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.526694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.526762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.526956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.527021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.527264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.527330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.527646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.527712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.527960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.528027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.528289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.528354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.528553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.528622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.528853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.528919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.529124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.529192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.529450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.529532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.529797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.529863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.530132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.530197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.530414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.530478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.530801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.530867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.545 [2024-11-18 08:09:56.531110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.545 [2024-11-18 08:09:56.531175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.545 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.531393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.531457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.531774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.531840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.532089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.532158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.532463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.532545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.532804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.532868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.533152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.533217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.533519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.533585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.533874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.533939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.534166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.534233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.534456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.534549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.534814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.534879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.535166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.535230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.535506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.535574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.535826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.535890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.536145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.536211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.536519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.536586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.536869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.536934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.537183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.537247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.537517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.537584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.537808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.537874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.538164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.538228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.538437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.538541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.538857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.538924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.539196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.539261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.539466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.539560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.539841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.539905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.540110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.540175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.540422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.540486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.540759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.540824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.541111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.541175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.541366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.541430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.541699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.541765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.542024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.542088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.542343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.542409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.542675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.542740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.542989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.543053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.543302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.546 [2024-11-18 08:09:56.543367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.546 qpair failed and we were unable to recover it.
00:36:03.546 [2024-11-18 08:09:56.543671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.543737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.543986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.544051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.544302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.544369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.544587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.544654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.544950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.545015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.545289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.545353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.545614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.545680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.545981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.546046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.546339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.546404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.546690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.546757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.547054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.547119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.547401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.547467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.547786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.547862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.548110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.548174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.548434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.548512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.548799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.548864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.549125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.549190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.549432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.549512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.549730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.549795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.550043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.550108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.550325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.547 [2024-11-18 08:09:56.550389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:03.547 qpair failed and we were unable to recover it.
00:36:03.547 [2024-11-18 08:09:56.550669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.550735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.550977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.551042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.551287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.551353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.551566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.551633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.551844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.551908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 
00:36:03.547 [2024-11-18 08:09:56.552226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.552291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.552578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.552645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.552858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.552922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.553216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.553280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.553532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.553598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 
00:36:03.547 [2024-11-18 08:09:56.553803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.553869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.554161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.554224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.554521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.554598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.554852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.554917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.555228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.555292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 
00:36:03.547 [2024-11-18 08:09:56.555598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.555665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.555919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.547 [2024-11-18 08:09:56.555984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.547 qpair failed and we were unable to recover it. 00:36:03.547 [2024-11-18 08:09:56.556233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.556297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.556611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.556678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.556927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.556993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 
00:36:03.548 [2024-11-18 08:09:56.557279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.557343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.557555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.557621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.557910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.557976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.558243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.558308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.558607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.558674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 
00:36:03.548 [2024-11-18 08:09:56.558964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.559028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.559335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.559399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.559685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.559752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.560043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.560107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.560367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.560434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 
00:36:03.548 [2024-11-18 08:09:56.560712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.560779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.560990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.561065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.561318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.561382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.561599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.561669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.561912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.561977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 
00:36:03.548 [2024-11-18 08:09:56.562184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.562250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.562536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.562605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.562854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.562918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.563228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.563293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.563598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.563664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 
00:36:03.548 [2024-11-18 08:09:56.563948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.564012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.564283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.564351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.564611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.564677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.564968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.565033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.565334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.565399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 
00:36:03.548 [2024-11-18 08:09:56.565693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.565759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.566049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.566114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.566330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.566395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.566667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.566734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.566986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.567050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 
00:36:03.548 [2024-11-18 08:09:56.567345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.567411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.567672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.567739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.567998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.568067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.568346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.568411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 00:36:03.548 [2024-11-18 08:09:56.568675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.548 [2024-11-18 08:09:56.568742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.548 qpair failed and we were unable to recover it. 
00:36:03.549 [2024-11-18 08:09:56.568983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.569048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.569256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.569322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.569571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.569641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.569860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.569925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.570210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.570276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 
00:36:03.549 [2024-11-18 08:09:56.570586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.570653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.570943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.571007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.571269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.571334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.571562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.571629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.571885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.571950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 
00:36:03.549 [2024-11-18 08:09:56.572161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.572226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.572445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.572526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.572783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.572848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.573114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.573179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.573380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.573445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 
00:36:03.549 [2024-11-18 08:09:56.573668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.573733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.574036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.574112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.574358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.574424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.574764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.574830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.575083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.575151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 
00:36:03.549 [2024-11-18 08:09:56.575453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.575544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.575766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.575833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.576054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.576120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.576318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.576385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.576661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.576728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 
00:36:03.549 [2024-11-18 08:09:56.577043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.577109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.577370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.577436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.577746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.577812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.578058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.578123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 00:36:03.549 [2024-11-18 08:09:56.578372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.549 [2024-11-18 08:09:56.578436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:03.549 qpair failed and we were unable to recover it. 
00:36:03.549 [2024-11-18 08:09:56.578778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.549 [2024-11-18 08:09:56.578877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.549 qpair failed and we were unable to recover it.
00:36:03.550 [2024-11-18 08:09:56.585285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.585351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.585648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.585715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.585925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.585990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.586249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.586314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.586616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.586682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 
00:36:03.550 [2024-11-18 08:09:56.586984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.587049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.587302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.587380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.587643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.587710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.588008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.588072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.588378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.588443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 
00:36:03.550 [2024-11-18 08:09:56.588705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.588772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.588974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.589038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.589288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.589354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.589590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.589657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.589944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.590009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 
00:36:03.550 [2024-11-18 08:09:56.590268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.590337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.590550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.590617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.590830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.590895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.591133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.591199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 00:36:03.550 [2024-11-18 08:09:56.591422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.550 [2024-11-18 08:09:56.591514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.550 qpair failed and we were unable to recover it. 
00:36:03.551 [2024-11-18 08:09:56.591752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.591821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.592088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.592153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.592410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.592475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.592715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.592789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.593109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.593175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 
00:36:03.551 [2024-11-18 08:09:56.593422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.593486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.593801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.593867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.594164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.594230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.594433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.594512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.594736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.594802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 
00:36:03.551 [2024-11-18 08:09:56.595014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.595080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.595363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.595429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.595730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.595797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.596100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.596167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.596402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.596470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 
00:36:03.551 [2024-11-18 08:09:56.596806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.596875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.597166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.597233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.597481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.597574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.597810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.597876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.598110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.598177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 
00:36:03.551 [2024-11-18 08:09:56.598430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.598512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.598801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.598868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.599090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.599154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.599351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.599416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.599731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.599798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 
00:36:03.551 [2024-11-18 08:09:56.599988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.600053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.600278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.600356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.600612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.600692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.551 [2024-11-18 08:09:56.600950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.551 [2024-11-18 08:09:56.601016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.551 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.601245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.601311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 
00:36:03.824 [2024-11-18 08:09:56.601565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.601631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.601888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.601952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.602221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.602289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.602586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.602656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.602966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.603041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 
00:36:03.824 [2024-11-18 08:09:56.603265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.603331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.603590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.603657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.603960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.604042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.604313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.604378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.604654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.604721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 
00:36:03.824 [2024-11-18 08:09:56.605043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.605111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.605334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.605399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.605715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.605782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.606049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.606115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.606359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.606424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 
00:36:03.824 [2024-11-18 08:09:56.606699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.606780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.607040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.607109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.607363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.607429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.607715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.607782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.608038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.608104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 
00:36:03.824 [2024-11-18 08:09:56.608354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.608421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.608707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.608785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.608995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.609059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.609270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.609335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.609549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.609616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 
00:36:03.824 [2024-11-18 08:09:56.609863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.609930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.610179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.610248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.610445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.610525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.610803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.610868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.824 [2024-11-18 08:09:56.611120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.611185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 
00:36:03.824 [2024-11-18 08:09:56.611401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.824 [2024-11-18 08:09:56.611467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.824 qpair failed and we were unable to recover it. 00:36:03.825 [2024-11-18 08:09:56.611748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.825 [2024-11-18 08:09:56.611816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.825 qpair failed and we were unable to recover it. 00:36:03.825 [2024-11-18 08:09:56.612061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.825 [2024-11-18 08:09:56.612127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.825 qpair failed and we were unable to recover it. 00:36:03.825 [2024-11-18 08:09:56.612372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.825 [2024-11-18 08:09:56.612439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.825 qpair failed and we were unable to recover it. 00:36:03.825 [2024-11-18 08:09:56.612809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.825 [2024-11-18 08:09:56.612919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.825 qpair failed and we were unable to recover it. 
00:36:03.825 [2024-11-18 08:09:56.613257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.613350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.613700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.613794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.614112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.614179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.614396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.614465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.614772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.614838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.615089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.615154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.615416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.615481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.615749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.615815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.616020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.616086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.616331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.616397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.616660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.616726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.616941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.617010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.617271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.617337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.617639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.617706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.617922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.617990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.618224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.618289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.618533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.618599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.618805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.618871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.619170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.619236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.619445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.619555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.619814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.619883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.620131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.620197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.620508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.620575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.620834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.620899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.621151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.621217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.621521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.621587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.621850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.621914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.622205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.622271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.622568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.622645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.622900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.622966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.623222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.623287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.623504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.623570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.623780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.623846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.825 qpair failed and we were unable to recover it.
00:36:03.825 [2024-11-18 08:09:56.624071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.825 [2024-11-18 08:09:56.624136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.624388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.624453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.624701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.624767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.625056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.625122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.625413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.625477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.625801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.625867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.626151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.626218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.626432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.626513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.626814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.626880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.627140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.627205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.627430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.627522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.627740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.627808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.628071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.628137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.628351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.628419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.628700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.628766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.629002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.629068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.629326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.629394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.629734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.629801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.630029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.630097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.630312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.630380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.630653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.630720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.630922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.630993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.631310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.631375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.631618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.631685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.631936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.632003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.632205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.632275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.632578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.632645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.632892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.632956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.633220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.633286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.633534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.633601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.633843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.633908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.634152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.634217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.634465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.634549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.634855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.634920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.635132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.635197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.635487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.635576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.635831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.635896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.636138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.826 [2024-11-18 08:09:56.636203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.826 qpair failed and we were unable to recover it.
00:36:03.826 [2024-11-18 08:09:56.636403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.636468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.636743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.636808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.637104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.637170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.637469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.637550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.637764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.637830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.638081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.638147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.638449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.638526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.638795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.638861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.639105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.639170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.639475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.639571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.639834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.639900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.640123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.640192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.640402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.640468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.640756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.640822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.641078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.641144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.641342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.641408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.641696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.641765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.642031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.642098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.642302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.642371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.642640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.642720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.642984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.643051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.643240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.643306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.643564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.643633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.643890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.827 [2024-11-18 08:09:56.643955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:03.827 qpair failed and we were unable to recover it.
00:36:03.827 [2024-11-18 08:09:56.644258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.644323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.644625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.644692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.644938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.645006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.645220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.645285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.645524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.645599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 
00:36:03.827 [2024-11-18 08:09:56.645893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.645958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.646207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.646273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.646507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.646574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.646817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.646881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.647124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.647190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 
00:36:03.827 [2024-11-18 08:09:56.647438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.647523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.647750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.647814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.648063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.648132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.827 qpair failed and we were unable to recover it. 00:36:03.827 [2024-11-18 08:09:56.648354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.827 [2024-11-18 08:09:56.648437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.648704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.648801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 
00:36:03.828 [2024-11-18 08:09:56.649084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.649154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.649447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.649529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.649841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.649905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.650173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.650238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.650475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.650557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 
00:36:03.828 [2024-11-18 08:09:56.650801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.650865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.651154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.651220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.651421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.651484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.651812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.651877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.652173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.652239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 
00:36:03.828 [2024-11-18 08:09:56.652504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.652570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.652836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.652903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.653179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.653245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.653435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.653513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.653810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.653875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 
00:36:03.828 [2024-11-18 08:09:56.654122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.654186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.654480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.654558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.654818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.654882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.655132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.655198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.655506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.655573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 
00:36:03.828 [2024-11-18 08:09:56.655840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.655905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.656194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.656258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.656520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.656586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.656833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.656898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.657135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.657200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 
00:36:03.828 [2024-11-18 08:09:56.657471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.657550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.657808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.657873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.658161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.658225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.658533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.658599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 00:36:03.828 [2024-11-18 08:09:56.658840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.828 [2024-11-18 08:09:56.658904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.828 qpair failed and we were unable to recover it. 
00:36:03.829 [2024-11-18 08:09:56.659195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.659259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.659462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.659564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.659777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.659843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.660097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.660164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.660409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.660474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 
00:36:03.829 [2024-11-18 08:09:56.660812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.660876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.661172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.661238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.661508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.661573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.661820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.661902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.662146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.662211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 
00:36:03.829 [2024-11-18 08:09:56.662453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.662531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.662724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.662792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.663004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.663068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.663323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.663388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.663617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.663682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 
00:36:03.829 [2024-11-18 08:09:56.663916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.663981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.664173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.664237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.664438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.664522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.664782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.664847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.665135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.665201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 
00:36:03.829 [2024-11-18 08:09:56.665413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.665476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.665800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.665865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.666142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.666208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.666448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.666525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.666775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.666840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 
00:36:03.829 [2024-11-18 08:09:56.667080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.667146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.667348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.667413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.667693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.667760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.668003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.668069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.668336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.668400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 
00:36:03.829 [2024-11-18 08:09:56.668629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.668695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.668997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.669062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.669328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.669392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.669662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.669727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.669940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.670004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 
00:36:03.829 [2024-11-18 08:09:56.670228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.670293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.670543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.670609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.829 qpair failed and we were unable to recover it. 00:36:03.829 [2024-11-18 08:09:56.670891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.829 [2024-11-18 08:09:56.670956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.830 qpair failed and we were unable to recover it. 00:36:03.830 [2024-11-18 08:09:56.671247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.830 [2024-11-18 08:09:56.671311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.830 qpair failed and we were unable to recover it. 00:36:03.830 [2024-11-18 08:09:56.671541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.830 [2024-11-18 08:09:56.671608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.830 qpair failed and we were unable to recover it. 
00:36:03.830 [2024-11-18 08:09:56.671856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.671921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.672209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.672273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.672532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.672600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.672903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.672967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.673215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.673279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.673541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.673607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.673820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.673885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.674086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.674149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.674437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.674525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.674750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.674814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.675005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.675067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.675367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.675431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.675711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.675777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.676033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.676097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.676348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.676412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.676675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.676740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.676999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.677063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.677356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.677419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.677723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.677788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.678081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.678146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.678350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.678414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.678645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.678714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.678978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.679042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.679288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.679355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.679571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.679639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.679922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.679986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.680203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.680267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.680527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.680594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.680876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.680939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.681195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.681260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.681518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.681584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.681871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.681935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.682174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.682241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.682486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.682565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.682781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.830 [2024-11-18 08:09:56.682848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.830 qpair failed and we were unable to recover it.
00:36:03.830 [2024-11-18 08:09:56.683140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.683206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.683457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.683546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.683796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.683861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.684149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.684213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.684465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.684548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.684804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.684868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.685123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.685187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.685391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.685455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.685769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.685833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.686129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.686192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.686483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.686560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.686842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.686907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.687121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.687184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.687400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.687474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.687809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.687874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.688080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.688143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.688390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.688453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.688760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.688824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.689067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.689132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.689382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.689445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.689713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.689778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.689974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.690040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.690264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.690328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.690604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.690669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.690875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.690945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.691189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.691254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.691511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.691577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.691850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.691916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.692161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.692224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.692430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.692505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.692738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.692802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.693061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.693127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.693387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.693451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.693702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.693765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.694004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.694067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.694363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.694427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.694738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.694802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.831 [2024-11-18 08:09:56.695031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.831 [2024-11-18 08:09:56.695095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.831 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.695354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.695417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.695679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.695743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.696015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.696080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.696390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.696453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.696715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.696779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.696976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.697040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.697294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.697357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.697653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.697719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.697982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.698046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.698296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.698363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.698615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.698681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.698971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.699035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.699291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.699355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.699619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.699686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.699991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.700055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.700345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.700420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.700684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.700751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.701009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.701073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.701375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.701438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.701700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.701768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.701983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.702047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.702304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.702368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.702663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.702729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.702988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.703052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.703292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.703358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.703608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.703673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.703881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.703947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.704248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.704312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.704561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.704627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.704942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.705006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.705221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.705286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.705543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.705608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.705861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.705925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.706172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.706235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.706533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.706599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.706864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.706928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.707183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.707251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.832 qpair failed and we were unable to recover it.
00:36:03.832 [2024-11-18 08:09:56.707534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.832 [2024-11-18 08:09:56.707602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.833 qpair failed and we were unable to recover it.
00:36:03.833 [2024-11-18 08:09:56.707860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.833 [2024-11-18 08:09:56.707925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.833 qpair failed and we were unable to recover it.
00:36:03.833 [2024-11-18 08:09:56.708172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.833 [2024-11-18 08:09:56.708236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.833 qpair failed and we were unable to recover it.
00:36:03.833 [2024-11-18 08:09:56.708526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.833 [2024-11-18 08:09:56.708593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.833 qpair failed and we were unable to recover it.
00:36:03.833 [2024-11-18 08:09:56.708817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.833 [2024-11-18 08:09:56.708883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.833 qpair failed and we were unable to recover it.
00:36:03.833 [2024-11-18 08:09:56.709140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.709204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.709507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.709572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.709863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.709928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.710180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.710243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.710537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.710621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 
00:36:03.833 [2024-11-18 08:09:56.710873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.710939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.711199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.711263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.711554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.711619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.711866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.711933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.712158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.712223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 
00:36:03.833 [2024-11-18 08:09:56.712474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.712552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.712805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.712870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.713165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.713229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.713485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.713573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.713824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.713888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 
00:36:03.833 [2024-11-18 08:09:56.714135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.714200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.714445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.714521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.714716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.714779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.715034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.715098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.715297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.715360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 
00:36:03.833 [2024-11-18 08:09:56.715579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.715645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.715856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.715920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.716175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.716239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.716523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.716588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.716791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.716855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 
00:36:03.833 [2024-11-18 08:09:56.717105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.717168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.717377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.717441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.717675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.717741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.833 [2024-11-18 08:09:56.718034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.833 [2024-11-18 08:09:56.718098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.833 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.718340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.718403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 
00:36:03.834 [2024-11-18 08:09:56.718671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.718736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.719032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.719096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.719348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.719412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.719722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.719786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.720030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.720093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 
00:36:03.834 [2024-11-18 08:09:56.720354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.720418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.720729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.720793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.721097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.721161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.721371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.721437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.721759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.721825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 
00:36:03.834 [2024-11-18 08:09:56.722081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.722147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.722398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.722462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.722728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.722792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.723089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.723154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.723454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.723544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 
00:36:03.834 [2024-11-18 08:09:56.723761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.723825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.724124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.724189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.724437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.724517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.724803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.724867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.725171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.725234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 
00:36:03.834 [2024-11-18 08:09:56.725505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.725571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.725837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.725901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.726195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.726259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.726518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.726601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.726851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.726916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 
00:36:03.834 [2024-11-18 08:09:56.727131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.727198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.727481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.727580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.727871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.727935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.728135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.728201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.728409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.728474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 
00:36:03.834 [2024-11-18 08:09:56.728764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.728829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.729042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.729108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.729320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.729385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.729606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.729673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.729958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.730022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 
00:36:03.834 [2024-11-18 08:09:56.730220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.834 [2024-11-18 08:09:56.730284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.834 qpair failed and we were unable to recover it. 00:36:03.834 [2024-11-18 08:09:56.730539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.730605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 00:36:03.835 [2024-11-18 08:09:56.730908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.730971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 00:36:03.835 [2024-11-18 08:09:56.731218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.731282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 00:36:03.835 [2024-11-18 08:09:56.731578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.731645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 
00:36:03.835 [2024-11-18 08:09:56.731944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.732007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 00:36:03.835 [2024-11-18 08:09:56.732211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.732275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 00:36:03.835 [2024-11-18 08:09:56.732520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.732586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 00:36:03.835 [2024-11-18 08:09:56.732841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.732906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 00:36:03.835 [2024-11-18 08:09:56.733160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.835 [2024-11-18 08:09:56.733224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.835 qpair failed and we were unable to recover it. 
00:36:03.835 [2024-11-18 08:09:56.733519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.733585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.733877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.733941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.734242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.734306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.734598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.734662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.734915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.734979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.735235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.735300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.735542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.735607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.735820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.735884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.736140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.736204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.736396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.736459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.736777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.736841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.737092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.737156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.737404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.737468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.737783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.737847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.738135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.738199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.738412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.738476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.738697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.738765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.738969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.739034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.739306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.739381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.739634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.739700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.739907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.739975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.740229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.740293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.740538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.740604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.740846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.740910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.741093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.741158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.741406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.741473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.741745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.741810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.742061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.742125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.742373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.835 [2024-11-18 08:09:56.742437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.835 qpair failed and we were unable to recover it.
00:36:03.835 [2024-11-18 08:09:56.742703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.742768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.743017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.743080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.743321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.743384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.743669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.743735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.743995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.744058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.744250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.744313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.744566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.744631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.744875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.744939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.745183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.745247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.745520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.745585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.745792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.745856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.746057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.746121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.746363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.746426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.746686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.746751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.746994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.747058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.747319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.747382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.747636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.747703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.747879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.747943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.748144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.748207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.748459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.748539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.748836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.748901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.749101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.749164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.749458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.749535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.749843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.749906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.750197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.750260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.750530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.750596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.750798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.750862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.751150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.751213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.751465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.751555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.751851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.751925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.752128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.752192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.752475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.752555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.752844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.752908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.753156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.753222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.753518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.753585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.753843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.753907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.754201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.754264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.836 [2024-11-18 08:09:56.754449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.836 [2024-11-18 08:09:56.754526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.836 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.754779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.754843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.755089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.755152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.755407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.755470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.755699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.755764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.756061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.756124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.756424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.756488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.756785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.756849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.757050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.757113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.757367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.757432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.757703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.757767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.758020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.758083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.758310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.758374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.758570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.758635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.758923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.758987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.759289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.759354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.759612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.759677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.759926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.759989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.760277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.760341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.760648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.760714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.760929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.760994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.761259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.761323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.761548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.761616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.761874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.761939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.762243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.762307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.762552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.762618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.762870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.762933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.763185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.763251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.763516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.763583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.763789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.763853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.764081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.764145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.764395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.764459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.764753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.764817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.765083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.837 [2024-11-18 08:09:56.765147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.837 qpair failed and we were unable to recover it.
00:36:03.837 [2024-11-18 08:09:56.765440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.765519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.765817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.765880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.766144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.766209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.766523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.766588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.766839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.766903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.767146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.767209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.767511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.767576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.767839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.767902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.768208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.768270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.768537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.768603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.768848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.768912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.769157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.769220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.769534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.769600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.769816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.769879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.770181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.838 [2024-11-18 08:09:56.770244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.838 qpair failed and we were unable to recover it.
00:36:03.838 [2024-11-18 08:09:56.770502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.770571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.770883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.770946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.771148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.771212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.771432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.771509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.771763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.771830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 
00:36:03.838 [2024-11-18 08:09:56.772034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.772100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.772307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.772371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.772674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.772741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.773035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.773099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.773350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.773413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 
00:36:03.838 [2024-11-18 08:09:56.773680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.773757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.774051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.774114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.774407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.774473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.774734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.774800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.775006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.775072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 
00:36:03.838 [2024-11-18 08:09:56.775357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.775422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.775726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.775792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.776058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.776122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.776422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.776486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.776808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.776873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 
00:36:03.838 [2024-11-18 08:09:56.777058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.777122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.777430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.838 [2024-11-18 08:09:56.777509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.838 qpair failed and we were unable to recover it. 00:36:03.838 [2024-11-18 08:09:56.777803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.777867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.778125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.778192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.778396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.778462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 
00:36:03.839 [2024-11-18 08:09:56.778730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.778795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.779088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.779152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.779443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.779521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.779790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.779855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.780113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.780179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 
00:36:03.839 [2024-11-18 08:09:56.780424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.780513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.780772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.780837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.781089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.781153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.781445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.781525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.781821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.781886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 
00:36:03.839 [2024-11-18 08:09:56.782143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.782207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.782462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.782541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.782810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.782875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.783124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.783189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.783505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.783572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 
00:36:03.839 [2024-11-18 08:09:56.783836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.783901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.784155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.784219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.784537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.784602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.784873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.784938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.785197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.785262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 
00:36:03.839 [2024-11-18 08:09:56.785457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.785541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.785829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.785894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.786197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.786261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.786465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.786544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.786798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.786863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 
00:36:03.839 [2024-11-18 08:09:56.787124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.787202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.787520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.787585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.787823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.787887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.788189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.788253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.788543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.788610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 
00:36:03.839 [2024-11-18 08:09:56.788861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.788926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.789161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.789226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.789477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.789555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.789802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.789869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 00:36:03.839 [2024-11-18 08:09:56.790113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.839 [2024-11-18 08:09:56.790178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.839 qpair failed and we were unable to recover it. 
00:36:03.839 [2024-11-18 08:09:56.790423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.790486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.790747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.790813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.791081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.791146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.791441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.791517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.791792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.791857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 
00:36:03.840 [2024-11-18 08:09:56.792116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.792183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.792466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.792555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.792856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.792920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.793187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.793250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.793466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.793550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 
00:36:03.840 [2024-11-18 08:09:56.793793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.793858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.794101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.794165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.794417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.794481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.794722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.794787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 00:36:03.840 [2024-11-18 08:09:56.795038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.840 [2024-11-18 08:09:56.795102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.840 qpair failed and we were unable to recover it. 
00:36:03.840 [2024-11-18 08:09:56.795315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.840 [2024-11-18 08:09:56.795379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.840 qpair failed and we were unable to recover it.
00:36:03.840 [2024-11-18 08:09:56.795640 through 08:09:56.833426] (identical connect() failed, errno = 111 / sock connection error / qpair failed messages repeat for tqpair=0x7f7b00000b90, addr=10.0.0.2, port=4420)
00:36:03.843 [2024-11-18 08:09:56.833741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.833807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.834066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.834130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.834416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.834479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.834699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.834763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.834966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.835030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 
00:36:03.843 [2024-11-18 08:09:56.835325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.835388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.835660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.835725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.835981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.836048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.836291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.836356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.836657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.836722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 
00:36:03.843 [2024-11-18 08:09:56.836936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.837000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.837211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.837274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.837460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.837541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.837808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.837873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.838084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.838147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 
00:36:03.843 [2024-11-18 08:09:56.838354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.843 [2024-11-18 08:09:56.838420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.843 qpair failed and we were unable to recover it. 00:36:03.843 [2024-11-18 08:09:56.838683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.838748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.838994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.839058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.839303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.839366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.839665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.839730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 
00:36:03.844 [2024-11-18 08:09:56.839976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.840052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.840346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.840413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.840658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.840723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.840955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.841019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.841265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.841331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 
00:36:03.844 [2024-11-18 08:09:56.841531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.841597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.841885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.841948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.842193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.842260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.842558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.842624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.842873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.842937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 
00:36:03.844 [2024-11-18 08:09:56.843187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.843250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.843514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.843582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.843844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.843908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.844178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.844243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.844560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.844628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 
00:36:03.844 [2024-11-18 08:09:56.844924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.844987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.845312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.845376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.845648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.845714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.845961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.846026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.846326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.846390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 
00:36:03.844 [2024-11-18 08:09:56.846658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.846726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.846930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.846994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.847216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.847282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.847547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.847613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.847896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.847960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 
00:36:03.844 [2024-11-18 08:09:56.848259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.848323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.848577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.848643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.848910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.848975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.849190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.849254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.849524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.849590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 
00:36:03.844 [2024-11-18 08:09:56.849839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.849903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.850194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.850257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.850469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.850553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.850845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.844 [2024-11-18 08:09:56.850910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.844 qpair failed and we were unable to recover it. 00:36:03.844 [2024-11-18 08:09:56.851197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.851261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 
00:36:03.845 [2024-11-18 08:09:56.851518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.851583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.851812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.851877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.852080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.852142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.852380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.852443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.852730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.852794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 
00:36:03.845 [2024-11-18 08:09:56.853075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.853149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.853399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.853463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.853772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.853836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.854084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.854149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.854447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.854527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 
00:36:03.845 [2024-11-18 08:09:56.854791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.854854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.855115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.855178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.855387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.855453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.855690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.855754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.855947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.856011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 
00:36:03.845 [2024-11-18 08:09:56.856294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.856358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.856601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.856667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.856918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.856983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.857185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.857251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 00:36:03.845 [2024-11-18 08:09:56.857567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.845 [2024-11-18 08:09:56.857633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:03.845 qpair failed and we were unable to recover it. 
00:36:03.845 [2024-11-18 08:09:56.857838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.857903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.858150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.858214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.858519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.858584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.858835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.858903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.859149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.859216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.859425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.859504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.859752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.859817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.860107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.860171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.860419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.860482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.860798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.860861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.861146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.861210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.861400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.861463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.861776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.861841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.862027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.862091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.862335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.862399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.862677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.862742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.845 [2024-11-18 08:09:56.862931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.845 [2024-11-18 08:09:56.862996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.845 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.863232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.863295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.863586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.863651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.863943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.864007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.864247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.864310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.864520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.864585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.864802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.864867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.865054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.865117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.865338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.865402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.865716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.865792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.866047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.866110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.866361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.866425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.866741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.866807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.867055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.867119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.867332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.867395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.867642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.867707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.867998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.868061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.868338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.868402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.868670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.868736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.868984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.869047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.869317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.869382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.869646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.869712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.869968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.870031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.870287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.870355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.870652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.870718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.871004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.871067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.871314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.871377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.871595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.871661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.871903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.871967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.872270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.872334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.872597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.872662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.872898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.872961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.873214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.873278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.873526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.873592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.873801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.873864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.874100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.874164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.874420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.874487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.874801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.846 [2024-11-18 08:09:56.874865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.846 qpair failed and we were unable to recover it.
00:36:03.846 [2024-11-18 08:09:56.875153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.875217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.875412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.875476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.875739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.875804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.876049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.876114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.876371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.876434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.876808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.876928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.877247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.877337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.877725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.877818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.878131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.878224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.878584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.878674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.879020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.879109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.879467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.879560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.879883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.879949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.880237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.880305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.880579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.880644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.880904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.880970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.881188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.881251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.881524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.881614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.881931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.882021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.882330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.882419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.882802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.882891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.883165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.883252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.883569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.883617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.883825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.883873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.884032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.884067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.884243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.884277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.884408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.884440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.884574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.884620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.884792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.884838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.884977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.885024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.885223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.885287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.885438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.885484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.885665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.885712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.847 qpair failed and we were unable to recover it.
00:36:03.847 [2024-11-18 08:09:56.885881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.847 [2024-11-18 08:09:56.885928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.886067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.886103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.886353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.886418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.886669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.886741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.886955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.887021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.887305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.887353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.887512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.887561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.887885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.887970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.888268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.888346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.888517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.888579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.888772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.888818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.888996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.889043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.889214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.889259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.889425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.889470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.889691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.889736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.889889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.889936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.890113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.890157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.890359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.890405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.890565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.890610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.890820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.890879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.891053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.891098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.891242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.891288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.891429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.891500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.891665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.891711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.891867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.891902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.892026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.892060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.892166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.892198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.892409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.848 [2024-11-18 08:09:56.892534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:03.848 qpair failed and we were unable to recover it.
00:36:03.848 [2024-11-18 08:09:56.892671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.892704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.848 [2024-11-18 08:09:56.892817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.892850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.848 [2024-11-18 08:09:56.892988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.893047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.848 [2024-11-18 08:09:56.893258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.893306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.848 [2024-11-18 08:09:56.893585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.893618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 
00:36:03.848 [2024-11-18 08:09:56.893751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.893810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.848 [2024-11-18 08:09:56.893952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.894048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.848 [2024-11-18 08:09:56.894396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.894483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.848 [2024-11-18 08:09:56.894705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.894750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.848 [2024-11-18 08:09:56.895035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.895121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 
00:36:03.848 [2024-11-18 08:09:56.895393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.848 [2024-11-18 08:09:56.895482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.848 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.895727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.895774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.896026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.896092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.896354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.896388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.896554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.896599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 
00:36:03.849 [2024-11-18 08:09:56.896735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.896778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.896934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.896981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.897117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.897162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.897479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.897561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.897700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.897744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 
00:36:03.849 [2024-11-18 08:09:56.897930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.897975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.898144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.898231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.898541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.898574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.898719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.898752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.898922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.898955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 
00:36:03.849 [2024-11-18 08:09:56.899164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.899227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:03.849 [2024-11-18 08:09:56.899441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.849 [2024-11-18 08:09:56.899521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:03.849 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.899662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.899706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.899874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.899919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.900188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.900280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-18 08:09:56.900570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.900617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.900762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.900794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.900911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.900963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.901173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.901235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.901522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.901576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-18 08:09:56.901716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.901751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.901891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.901926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.902056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.902091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.902231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.902266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.902401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.902438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-18 08:09:56.902580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.902608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.902722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.902748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.902860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.902893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.902994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.903026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.903130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.903163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-18 08:09:56.903293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.903372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.903585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.903612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.903700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.903725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.903855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.903887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.903996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.904046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-18 08:09:56.904214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.904247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.904363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.904388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.904467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.904499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.904612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.904648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.904765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.904823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-18 08:09:56.904965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.905011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-18 08:09:56.905150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-18 08:09:56.905196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.905452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.905487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.905614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.905649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.905810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.905856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 
00:36:04.126 [2024-11-18 08:09:56.906084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.906166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.906388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.906422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.906580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.906606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.906682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.906708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.906850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.906896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 
00:36:04.126 [2024-11-18 08:09:56.907183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.907245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.907550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.907607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.907747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.907784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.907962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.908040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.908342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.908431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 
00:36:04.126 [2024-11-18 08:09:56.908678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.908716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.908817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.908844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.908939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.908965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.909119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.909183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.909449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.909505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 
00:36:04.126 [2024-11-18 08:09:56.909640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.909677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.909819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.909853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.909990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.910035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.910179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.910272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-18 08:09:56.910402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.126 [2024-11-18 08:09:56.910437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.126 qpair failed and we were unable to recover it. 
00:36:04.127 [2024-11-18 08:09:56.913139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.127 [2024-11-18 08:09:56.913177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.127 qpair failed and we were unable to recover it.
00:36:04.127 [2024-11-18 08:09:56.913781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.913807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.913950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.913977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.914132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.914158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.914240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.914266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.914364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.914392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 
00:36:04.127 [2024-11-18 08:09:56.914497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.914523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.914609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.914634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.914749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.914774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.914865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.914891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.914983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.915009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 
00:36:04.127 [2024-11-18 08:09:56.915104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.915131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.915259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.915284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.915395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.915421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.915513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.915541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.915654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.915679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 
00:36:04.127 [2024-11-18 08:09:56.915822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.915855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.916023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.916057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.916397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.916430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.916601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.916628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.916708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.916734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 
00:36:04.127 [2024-11-18 08:09:56.916993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.917018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.917159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.917191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.917434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.917475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.917622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.917648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.917735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.917762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 
00:36:04.127 [2024-11-18 08:09:56.917968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.918038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.127 [2024-11-18 08:09:56.918341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.127 [2024-11-18 08:09:56.918405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.127 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.918574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.918600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.918711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.918736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.918863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.918925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 
00:36:04.128 [2024-11-18 08:09:56.919216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.919278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.919523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.919568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.919656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.919681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.919764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.919789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.919878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.919903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 
00:36:04.128 [2024-11-18 08:09:56.919992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.920017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.920097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.920144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.920295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.920329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.920526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.920575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.920683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.920708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 
00:36:04.128 [2024-11-18 08:09:56.920814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.920839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.920927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.920952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.921102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.921159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.921338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.921371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.921484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.921524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 
00:36:04.128 [2024-11-18 08:09:56.921659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.921685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.921769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.921810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.921942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.921974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.922164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.922189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.922441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.922475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 
00:36:04.128 [2024-11-18 08:09:56.922649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.922674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.922758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.922805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.922909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.922943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.923103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.923150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.923304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.923337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 
00:36:04.128 [2024-11-18 08:09:56.923532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.923579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.923684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.923709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.923849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.923881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.923986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.924019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.924171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.924196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 
00:36:04.128 [2024-11-18 08:09:56.924289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.924314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.924463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.924524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.924668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.924696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.924812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.924838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 00:36:04.128 [2024-11-18 08:09:56.925006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.128 [2024-11-18 08:09:56.925041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.128 qpair failed and we were unable to recover it. 
00:36:04.129 [2024-11-18 08:09:56.925294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.925354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 00:36:04.129 [2024-11-18 08:09:56.925604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.925631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 00:36:04.129 [2024-11-18 08:09:56.925722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.925748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 00:36:04.129 [2024-11-18 08:09:56.925915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.925948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 00:36:04.129 [2024-11-18 08:09:56.926078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.926112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 
00:36:04.129 [2024-11-18 08:09:56.926289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.926325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 00:36:04.129 [2024-11-18 08:09:56.926471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.926512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 00:36:04.129 [2024-11-18 08:09:56.926627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.926661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 00:36:04.129 [2024-11-18 08:09:56.926772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.926806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 00:36:04.129 [2024-11-18 08:09:56.926919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.129 [2024-11-18 08:09:56.926953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.129 qpair failed and we were unable to recover it. 
00:36:04.129 [2024-11-18 08:09:56.927095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.927127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.927271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.927305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.927413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.927447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.927567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.927601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.927718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.927751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.927883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.927916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.928019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.928052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.928165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.928198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.928334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.928367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.928517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.928551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.928690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.928727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.928904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.928938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.929046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.929080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.929284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.929343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.929587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.929623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.929791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.929825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.930057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.930107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.930374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.930408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.930543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.930577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.930709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.930743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.930953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.931017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.129 [2024-11-18 08:09:56.931298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.129 [2024-11-18 08:09:56.931357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.129 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.931617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.931652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.931826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.931888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.932170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.932229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.932507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.932541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.932712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.932746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.932979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.933049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.933280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.933340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.933563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.933599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.933735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.933769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.934055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.934115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.934293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.934353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.934584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.934618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.934791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.934825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.934937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.934971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.935082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.935115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.935312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.935345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.935602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.935636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.935816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.935849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.935960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.935993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.936142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.936175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.936319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.936353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.936522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.936556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.936664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.936699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.936841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.936875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.937018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.937052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.937195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.937228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.937422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.937483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.937672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.937706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.937884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.937943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.938159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.938229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.938462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.938551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.938696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.938730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.938935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.938996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.939173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.939235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.939438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.939527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.939680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.939714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.939925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.939959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.130 qpair failed and we were unable to recover it.
00:36:04.130 [2024-11-18 08:09:56.940156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.130 [2024-11-18 08:09:56.940216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.940435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.940469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.940586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.940619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.940761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.940795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.940909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.940943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.941093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.941129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.941271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.941305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.941524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.941584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.941764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.941833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.942055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.942129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.942398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.942431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.942588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.942622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.942741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.942775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.942909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.942955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.943895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.943935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.944116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.944152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.944269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.944304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.944425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.944459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.944616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.944642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.944729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.944755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.944873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.944899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.944989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.945016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.945109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.945136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.945277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.945304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.945388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.945414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.945528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.945554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.945642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.945668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.945781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.945807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.946004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.946037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.946177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.946210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.946353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.946396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.946601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.946644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.946796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.946840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.947017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.947063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.947279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.947322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.948637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.131 [2024-11-18 08:09:56.948668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.131 qpair failed and we were unable to recover it.
00:36:04.131 [2024-11-18 08:09:56.948757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.131 [2024-11-18 08:09:56.948783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.131 qpair failed and we were unable to recover it. 00:36:04.131 [2024-11-18 08:09:56.948901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.131 [2024-11-18 08:09:56.948927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.131 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.949010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.949037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.949116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.949141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.949258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.949284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 
00:36:04.132 [2024-11-18 08:09:56.949364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.949390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.949485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.949517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.949611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.949637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.949730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.949756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.949868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.949894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 
00:36:04.132 [2024-11-18 08:09:56.949985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.950012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.950165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.950191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.950279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.950311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.950400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.950427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.950517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.950544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 
00:36:04.132 [2024-11-18 08:09:56.950662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.950688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.950802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.950828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.950919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.950945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.951086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.951112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.951197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.951223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 
00:36:04.132 [2024-11-18 08:09:56.951340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.951366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.951559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.951585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.951700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.951725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.951863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.951891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.952023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.952052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 
00:36:04.132 [2024-11-18 08:09:56.952133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.952176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.952288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.952315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.952403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.952428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.952518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.952546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.952636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.952663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 
00:36:04.132 [2024-11-18 08:09:56.952782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.952808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.952925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.952950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.953069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.953095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.953179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.953204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.953320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.953346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 
00:36:04.132 [2024-11-18 08:09:56.953465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.953500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.953591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.953634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.953733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.953758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.953884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.953910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 00:36:04.132 [2024-11-18 08:09:56.953996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.132 [2024-11-18 08:09:56.954022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.132 qpair failed and we were unable to recover it. 
00:36:04.133 [2024-11-18 08:09:56.954111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.954141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.954239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.954265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.954351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.954377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.954462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.954504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.954589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.954616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 
00:36:04.133 [2024-11-18 08:09:56.954705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.954733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.954873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.954899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.955010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.955036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.955124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.955150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.955264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.955290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 
00:36:04.133 [2024-11-18 08:09:56.955407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.955432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.955526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.955554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.955688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.955743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.955913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.955941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.956040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.956066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 
00:36:04.133 [2024-11-18 08:09:56.956214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.956240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.956355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.956381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.956486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.956544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.956697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.956746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.956896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.956939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 
00:36:04.133 [2024-11-18 08:09:56.957073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.957100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.957251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.957277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.957363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.957388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.957484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.957525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.957663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.957691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 
00:36:04.133 [2024-11-18 08:09:56.957858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.957906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.958073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.958099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.958202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.958228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.958320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.958346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.958482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.958514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 
00:36:04.133 [2024-11-18 08:09:56.958658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.958705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.958865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.958893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.959029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.959054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.959172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.959197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.959283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.959309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 
00:36:04.133 [2024-11-18 08:09:56.959423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.959449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.959599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.959628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.133 qpair failed and we were unable to recover it. 00:36:04.133 [2024-11-18 08:09:56.959730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.133 [2024-11-18 08:09:56.959756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.959908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.959936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.960091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.960131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 
00:36:04.134 [2024-11-18 08:09:56.960283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.960311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.960404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.960430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.960579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.960607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.960711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.960753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.960859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.960901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 
00:36:04.134 [2024-11-18 08:09:56.961037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.961062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.961169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.961194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.961306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.961331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.961451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.961476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.961575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.961619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 
00:36:04.134 [2024-11-18 08:09:56.961744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.961777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.961902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.961927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.962074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.962099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.962205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.962231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.962375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.962401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 
00:36:04.134 [2024-11-18 08:09:56.962525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.962550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.962635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.962661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.962757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.962782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.962869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.962895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.963008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.963032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 
00:36:04.134 [2024-11-18 08:09:56.963125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.963150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.963262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.963287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.963401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.963426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.963552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.963581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.963697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.963723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 
00:36:04.134 [2024-11-18 08:09:56.963811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.963837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.963951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.963981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.964089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.964115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.964226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.964253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.134 [2024-11-18 08:09:56.964345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.964372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 
00:36:04.134 [2024-11-18 08:09:56.964508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.134 [2024-11-18 08:09:56.964537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.134 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.964682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.964709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.964822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.964849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.964999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.965026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.965115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.965142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 
00:36:04.135 [2024-11-18 08:09:56.965287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.965320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.965473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.965535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.965658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.965685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.965821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.965853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.965946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.965979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 
00:36:04.135 [2024-11-18 08:09:56.966138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.966171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.966301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.966333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.966443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.966485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.966616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.966643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.966762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.966792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 
00:36:04.135 [2024-11-18 08:09:56.966921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.966969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.967179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.967246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.967380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.967407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.967555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.967583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.967675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.967703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 
00:36:04.135 [2024-11-18 08:09:56.967823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.967850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.967971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.967999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.968091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.968118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.968255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.968285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.968406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.968433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 
00:36:04.135 [2024-11-18 08:09:56.968540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.968569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.968658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.968686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.968804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.968831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.968953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.968981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.969127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.969162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 
00:36:04.135 [2024-11-18 08:09:56.969296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.969329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.969515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.969562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.969734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.969766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.969917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.969958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.970107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.970141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 
00:36:04.135 [2024-11-18 08:09:56.970304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.970356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.970459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.970487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.970608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.970666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.135 [2024-11-18 08:09:56.970862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.135 [2024-11-18 08:09:56.970912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.135 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.971083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.971129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 
00:36:04.136 [2024-11-18 08:09:56.971252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.971280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.971428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.971455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.971652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.971695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.971974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.972020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.972186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.972220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 
00:36:04.136 [2024-11-18 08:09:56.972343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.972370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.972506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.972535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.972627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.972655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.972825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.972880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.973019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.973054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 
00:36:04.136 [2024-11-18 08:09:56.973201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.973251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.973383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.973412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.973506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.973534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.973618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.973646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.973777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.973823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 
00:36:04.136 [2024-11-18 08:09:56.973987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.974044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.974212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.974245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.974347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.974374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.974496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.974524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.974670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.974720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 
00:36:04.136 [2024-11-18 08:09:56.974883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.974917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.975109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.975162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.975251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.975278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.975369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.975400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.975559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.975587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 
00:36:04.136 [2024-11-18 08:09:56.975702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.975730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.975846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.975874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.975975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.976003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.976130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.976181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.976321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.976349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 
00:36:04.136 [2024-11-18 08:09:56.976441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.976468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.976564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.976591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.976697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.976738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.976869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.976898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.977022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.977051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 
00:36:04.136 [2024-11-18 08:09:56.977180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.977208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.136 qpair failed and we were unable to recover it. 00:36:04.136 [2024-11-18 08:09:56.977327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.136 [2024-11-18 08:09:56.977355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.137 qpair failed and we were unable to recover it. 00:36:04.137 [2024-11-18 08:09:56.977448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.137 [2024-11-18 08:09:56.977477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.137 qpair failed and we were unable to recover it. 00:36:04.137 [2024-11-18 08:09:56.977618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.137 [2024-11-18 08:09:56.977653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.137 qpair failed and we were unable to recover it. 00:36:04.137 [2024-11-18 08:09:56.977794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.137 [2024-11-18 08:09:56.977843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.137 qpair failed and we were unable to recover it. 
00:36:04.137 [2024-11-18 08:09:56.978031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.978081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.978326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.978375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.978570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.978598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.978716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.978745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.978906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.978942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.979133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.979182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.979370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.979403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.979547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.979576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.979695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.979724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.979931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.979974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.980208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.980245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.980419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.980474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.980617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.980645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.980739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.980767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.980872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.980900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.981046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.981081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.981255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.981295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.981438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.981472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.981613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.981654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.981752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.981781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.981898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.981925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.982147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.982195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.982441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.982483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.982643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.982670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.982780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.982814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.982993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.983059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.983294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.983358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.983567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.983595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.983715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.983744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.983864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.983915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.984108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.984169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.984357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.984436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.984629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.984657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.984753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.984781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.137 qpair failed and we were unable to recover it.
00:36:04.137 [2024-11-18 08:09:56.984883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.137 [2024-11-18 08:09:56.984910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.985029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.985056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.985260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.985334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.985509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.985539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.985658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.985687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.985835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.985878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.986060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.986121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.986288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.986323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.986505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.986550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.986642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.986670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.986787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.986819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.987008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.987056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.987283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.987332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.987520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.987577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.987702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.987730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.987846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.987874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.987985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.988030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.988234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.988283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.988468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.988504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.988623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.988651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.988823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.988859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.988990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.989051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.989283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.989332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.989535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.989563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.989687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.989715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.989812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.989840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.989997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.990032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.990301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.990352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.990520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.990566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.990691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.990720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.990828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.990857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.990946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.990973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.991147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.991192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.991359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.991387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.991535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.991563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.991710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.991738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.991933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.138 [2024-11-18 08:09:56.991969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.138 qpair failed and we were unable to recover it.
00:36:04.138 [2024-11-18 08:09:56.992110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.992143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.992314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.992366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.992606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.992655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.992803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.992849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.993038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.993089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.993301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.993345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.993520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.993608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.993831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.993883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.994037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.994087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.994291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.994330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.994443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.994479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.994686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.994730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.994876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.994923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.995086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.995148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.995297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.995377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.995571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.995622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.995768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.995817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.996069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.996114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.996246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.996318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.996478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.996549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.996774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.996819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.996987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.997049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.997244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.997293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.997527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.997577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.997809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.997858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.998056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.998106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.998295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.998344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.998507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.998571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.998755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.998800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.998988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.999046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.999229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.999277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.999468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.999528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.999728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:56.999779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.139 [2024-11-18 08:09:56.999987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.139 [2024-11-18 08:09:57.000032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.139 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.000175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.000211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.000444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.000502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.000704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.000753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.000950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.000999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.001207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.001242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.001346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.001382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.001614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.001650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.001762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.001798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.001992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.002041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.002253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.002296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.002527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.002577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.002784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.002829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.003044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.003110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.003323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.003374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.003538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.003587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.003773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.003821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.004019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.004076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.004313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.004362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.004535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.004585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.004805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.004848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.005045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.005094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.005246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.005319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.005501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.005551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.005733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.005782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.006013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.006062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.006252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.006309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.006573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.006624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.006834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.006879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.007021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.007066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.007276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.007325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.007523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.007573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.007717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.007777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.007989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.008039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.008219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.008267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.008418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.008501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.008666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.008712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.008888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.140 [2024-11-18 08:09:57.008951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.140 qpair failed and we were unable to recover it.
00:36:04.140 [2024-11-18 08:09:57.009096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.009147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.009377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.009427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.009699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.009753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.009992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.010044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.010254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.010316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.010549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.010585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.010742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.010777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.010968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.011020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.011300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.011364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.011641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.011688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.011879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.011926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.012124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.012183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.012339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.012374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.012649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.012702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.012911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.012965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.013200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.013277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.013469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.013554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.013803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.013855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.014120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.014172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.014409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.014444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.014602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.014664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.014874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.014926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.015142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.015194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.015404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.015455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.015674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.015726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.015929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.015982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.016185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.016238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.016436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.016518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.016705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.016767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.016936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.016987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.017237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.017289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.017455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.017531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.017708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.017751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.017930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.017972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.018220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.018271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.018474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.018519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.018698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.018750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.141 qpair failed and we were unable to recover it.
00:36:04.141 [2024-11-18 08:09:57.018952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.141 [2024-11-18 08:09:57.019004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.019242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.019294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.019546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.019599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.019813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.019855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.020047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.020082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.020261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.020345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.020545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.020597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.020841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.020883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.021116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.021167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.021376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.021420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.021611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.142 [2024-11-18 08:09:57.021675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.142 qpair failed and we were unable to recover it.
00:36:04.142 [2024-11-18 08:09:57.021873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.021925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.022171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.022223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.022506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.022561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.022792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.022842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.023082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.023133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 
00:36:04.142 [2024-11-18 08:09:57.023344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.023399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.023654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.023710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.023907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.023963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.024183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.024237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.024437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.024513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 
00:36:04.142 [2024-11-18 08:09:57.024780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.024836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.025041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.025095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.025328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.025381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.025660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.025712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.025928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.025980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 
00:36:04.142 [2024-11-18 08:09:57.026232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.026274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.026452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.026519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.026687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.026777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.027099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.027163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.027472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.027566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 
00:36:04.142 [2024-11-18 08:09:57.027784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.027836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.028053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.028104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.028353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.028404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.028598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.028653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.028879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.028934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 
00:36:04.142 [2024-11-18 08:09:57.029203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.029245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.029415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.142 [2024-11-18 08:09:57.029486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.142 qpair failed and we were unable to recover it. 00:36:04.142 [2024-11-18 08:09:57.029773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.029828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.030028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.030092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.030401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.030464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 
00:36:04.143 [2024-11-18 08:09:57.030744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.030807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.031107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.031171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.031475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.031564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.031793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.031859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.032147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.032204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 
00:36:04.143 [2024-11-18 08:09:57.032388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.032442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.032674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.032729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.032899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.032953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.033224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.033278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.033585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.033642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 
00:36:04.143 [2024-11-18 08:09:57.033894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.033948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.034164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.034219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.034476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.034546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.034761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.034815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.035068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.035122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 
00:36:04.143 [2024-11-18 08:09:57.035418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.035481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.035729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.035783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.036032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.036097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.036402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.036467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.036792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.036855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 
00:36:04.143 [2024-11-18 08:09:57.037127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.037202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.037465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.037568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.037841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.037905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.038214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.038277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.038517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.038576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 
00:36:04.143 [2024-11-18 08:09:57.038791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.038851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.039026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.039081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.039340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.039407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.039639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.039696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 00:36:04.143 [2024-11-18 08:09:57.039946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.143 [2024-11-18 08:09:57.040010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.143 qpair failed and we were unable to recover it. 
00:36:04.143 [2024-11-18 08:09:57.040281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.040345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.040644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.040701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.040888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.040944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.041117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.041172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.041406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.041461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 
00:36:04.144 [2024-11-18 08:09:57.041753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.041816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.042069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.042124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.042343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.042398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.042615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.042681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.042994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.043059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 
00:36:04.144 [2024-11-18 08:09:57.043325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.043388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.043669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.043737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.044013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.044078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.044400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.044465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.044747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.044811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 
00:36:04.144 [2024-11-18 08:09:57.045120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.045184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.045454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.045550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.045818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.045874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.046133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.046188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.046415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.046479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 
00:36:04.144 [2024-11-18 08:09:57.046726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.046781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.047030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.047095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.047406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.047471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.047800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.047864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 00:36:04.144 [2024-11-18 08:09:57.048146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.048210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 
00:36:04.144 [2024-11-18 08:09:57.048468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.144 [2024-11-18 08:09:57.048560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.144 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / qpair recovery failure messages repeat for tqpair=0x7f7b00000b90 from 08:09:57.048832 through 08:09:57.085342 ...]
00:36:04.147 [2024-11-18 08:09:57.085567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.085636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 
00:36:04.147 [2024-11-18 08:09:57.085881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.085952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 00:36:04.147 [2024-11-18 08:09:57.086217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.086291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 00:36:04.147 [2024-11-18 08:09:57.086601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.086678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 00:36:04.147 [2024-11-18 08:09:57.086947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.087016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 00:36:04.147 [2024-11-18 08:09:57.087285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.087358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 
00:36:04.147 [2024-11-18 08:09:57.087607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.087689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 00:36:04.147 [2024-11-18 08:09:57.087950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.088031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 00:36:04.147 [2024-11-18 08:09:57.088280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.088369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 00:36:04.147 [2024-11-18 08:09:57.088712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.088788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 00:36:04.147 [2024-11-18 08:09:57.089109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.089186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.147 qpair failed and we were unable to recover it. 
00:36:04.147 [2024-11-18 08:09:57.089443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.147 [2024-11-18 08:09:57.089561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.089844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.089915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.090186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.090266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.090473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.090563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.090821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.090895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 
00:36:04.148 [2024-11-18 08:09:57.091110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.091177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.091487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.091572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.091852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.091936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.092214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.092284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.092538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.092612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 
00:36:04.148 [2024-11-18 08:09:57.092893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.092977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.093246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.093323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.093577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.093648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.093890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.093957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.094271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.094353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 
00:36:04.148 [2024-11-18 08:09:57.094636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.094705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.095013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.095094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.095372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.095447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.095732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.095801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.096066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.096147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 
00:36:04.148 [2024-11-18 08:09:57.096366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.096436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.096778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.096846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.097117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.097189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.097409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.097475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.097871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.097943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 
00:36:04.148 [2024-11-18 08:09:57.098176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.098247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.098537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.098616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.098933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.099000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.099207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.099283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.099508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.099577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 
00:36:04.148 [2024-11-18 08:09:57.099855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.099933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.100170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.100240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.100530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.100600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.148 qpair failed and we were unable to recover it. 00:36:04.148 [2024-11-18 08:09:57.100891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.148 [2024-11-18 08:09:57.100972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.101250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.101325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 
00:36:04.149 [2024-11-18 08:09:57.101643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.101724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.101955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.102036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.102321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.102410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.102740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.102823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.103109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.103177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 
00:36:04.149 [2024-11-18 08:09:57.103449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.103541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.103826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.103906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.104188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.104256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.104530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.104604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.104862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.104940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 
00:36:04.149 [2024-11-18 08:09:57.105207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.105286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.105544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.105622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.105884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.105951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.106176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.106243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.106523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.106600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 
00:36:04.149 [2024-11-18 08:09:57.106867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.106943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.107201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.107282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.107525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.107607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.107872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.107938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.108153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.108219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 
00:36:04.149 [2024-11-18 08:09:57.108448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.108542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.108825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.108909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.109234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.109302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.109567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.109638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.109902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.109971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 
00:36:04.149 [2024-11-18 08:09:57.110261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.110337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.110624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.110702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.110978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.111049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.111356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.111433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 00:36:04.149 [2024-11-18 08:09:57.111708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.149 [2024-11-18 08:09:57.111776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.149 qpair failed and we were unable to recover it. 
00:36:04.149 [2024-11-18 08:09:57.112002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.149 [2024-11-18 08:09:57.112071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.149 qpair failed and we were unable to recover it.
00:36:04.149 [2024-11-18 08:09:57.112320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.149 [2024-11-18 08:09:57.112391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.149 qpair failed and we were unable to recover it.
00:36:04.149 [2024-11-18 08:09:57.112680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.149 [2024-11-18 08:09:57.112761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.149 qpair failed and we were unable to recover it.
00:36:04.149 [2024-11-18 08:09:57.113019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.149 [2024-11-18 08:09:57.113101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.149 qpair failed and we were unable to recover it.
00:36:04.149 [2024-11-18 08:09:57.113382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.149 [2024-11-18 08:09:57.113449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.149 qpair failed and we were unable to recover it.
00:36:04.149 [2024-11-18 08:09:57.113707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.149 [2024-11-18 08:09:57.113774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.149 qpair failed and we were unable to recover it.
00:36:04.149 [2024-11-18 08:09:57.114097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.114178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.114520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.114590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.114854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.114935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.115180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.115262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.115570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.115639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.115865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.115934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.116256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.116349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.116588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.116672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.116950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.117017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.117260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.117331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.117579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.117649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.117880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.117949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.118224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.118303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.118580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.118657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.118964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.119032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.119286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.119353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.119605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.119679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.119941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.120011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.120338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.120414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.120686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.120756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.121036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.121112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.121425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.121511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.121750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.121818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.122142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.122224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.122504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.122585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.122872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.122953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.123215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.123289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.123506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.123583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.123887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.123955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.124190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.124256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.124530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.124602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.124855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.124926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.125149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.125216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.125582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.125700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.125976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.126065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.126354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.126425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.126691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.150 [2024-11-18 08:09:57.126764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.150 qpair failed and we were unable to recover it.
00:36:04.150 [2024-11-18 08:09:57.127064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.127155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.127525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.127617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.127958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.128041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.128268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.128335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.128616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.128683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.128917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.128998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.129234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.129308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.129552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.129637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.129946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.130015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.130337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.130426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.130744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.130819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.131059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.131139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.131382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.131458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.131748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.131821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.132086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.132155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.132445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.132547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.132772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.132852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.133168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.133246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.133561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.133637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.133867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.133934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.134249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.134333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.134619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.134689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.135000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.135086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.135387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.135457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.135774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.135860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.136157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.136223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.136484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.136568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.136869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.136932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.137163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.137227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.137468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.137575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.137836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.137899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.138122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.138189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.138428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.138512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.138769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.138833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.139124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.139187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.139430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.139511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.139834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.139899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.151 [2024-11-18 08:09:57.140204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.151 [2024-11-18 08:09:57.140269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.151 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.140472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.140552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.140777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.140843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.141101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.141165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.141449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.141531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.141835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.141900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.142155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.142219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.142471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.142553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.142847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.142911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.143157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.143221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.143469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.143551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.143853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.143917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.144210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.144276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.144576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.144642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.144949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.145013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.145305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.145369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.145622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.145688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.145940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.146006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.146235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.146300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.146549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.146615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.146900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.146964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.147274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.147338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.147590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.147656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.147855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.147917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.148165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.148231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.148432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.148518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.148779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.148844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.149146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.149211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.149542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.149609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.149912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.149977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.150184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.150247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.150508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.150575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.150819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.150883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.151139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.152 [2024-11-18 08:09:57.151203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.152 qpair failed and we were unable to recover it.
00:36:04.152 [2024-11-18 08:09:57.151517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.153 [2024-11-18 08:09:57.151583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.153 qpair failed and we were unable to recover it.
00:36:04.153 [2024-11-18 08:09:57.151825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.151889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.152147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.152212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.152440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.152522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.152830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.152894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.153138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.153212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 
00:36:04.153 [2024-11-18 08:09:57.153459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.153542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.153741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.153805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.154000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.154066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.154359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.154423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.154645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.154713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 
00:36:04.153 [2024-11-18 08:09:57.154920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.154985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.155225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.155290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.155547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.155613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.155907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.155972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.156216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.156279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 
00:36:04.153 [2024-11-18 08:09:57.156510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.156575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.156811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.156875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.157071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.157134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.157430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.157512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.157812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.157876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 
00:36:04.153 [2024-11-18 08:09:57.158117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.158180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.158426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.158508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.158726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.158793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.159086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.159150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.159438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.159527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 
00:36:04.153 [2024-11-18 08:09:57.159777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.159841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.160126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.160190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.160440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.160527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.160789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.160855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.161155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.161219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 
00:36:04.153 [2024-11-18 08:09:57.161531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.161596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.161848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.161913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.162200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.162263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.162517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.162582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.162828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.162892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 
00:36:04.153 [2024-11-18 08:09:57.163141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.163207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.153 [2024-11-18 08:09:57.163397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.153 [2024-11-18 08:09:57.163461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.153 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.163700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.163764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.164010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.164073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.164285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.164349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 
00:36:04.154 [2024-11-18 08:09:57.164637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.164703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.164990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.165053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.165341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.165405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.165675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.165740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.166040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.166114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 
00:36:04.154 [2024-11-18 08:09:57.166318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.166381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.166658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.166724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.166982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.167044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.167282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.167346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.167552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.167620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 
00:36:04.154 [2024-11-18 08:09:57.167890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.167953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.168170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.168235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.168526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.168592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.168834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.168897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.169184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.169247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 
00:36:04.154 [2024-11-18 08:09:57.169517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.169583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.169819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.169882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.170133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.170196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.170452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.170531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.170777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.170840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 
00:36:04.154 [2024-11-18 08:09:57.171075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.171138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.171345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.171409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.171702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.171767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.172013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.172078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.172325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.172389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 
00:36:04.154 [2024-11-18 08:09:57.172689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.172754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.173063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.173128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.173386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.173450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.173663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.173728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.173962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.174026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 
00:36:04.154 [2024-11-18 08:09:57.174236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.174300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.174603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.174668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.174965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.175030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.175328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.154 [2024-11-18 08:09:57.175391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.154 qpair failed and we were unable to recover it. 00:36:04.154 [2024-11-18 08:09:57.175702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.155 [2024-11-18 08:09:57.175766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.155 qpair failed and we were unable to recover it. 
00:36:04.155 [2024-11-18 08:09:57.176028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.176093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.176340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.176406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.176668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.176734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.177023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.177086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.177289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.177355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.177638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.177704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.177894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.177958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.178204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.178267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.178552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.178617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.178885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.178958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.179213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.179276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.179529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.179594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.179886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.179950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.180159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.180222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.180483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.180564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.180812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.180879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.181096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.181160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.181441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.181519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.181767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.181833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.182089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.182152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.182409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.182473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.182739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.182805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.183049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.183112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.183406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.183470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.183760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.183824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.184086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.184149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.184355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.184420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.184631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.184695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.184952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.185016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.185312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.185376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.185584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.185650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.185916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.185980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.186224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.186288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.186536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.186601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.186787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.186853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.187091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.187157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.155 [2024-11-18 08:09:57.187383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.155 [2024-11-18 08:09:57.187447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.155 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.187727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.187791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.188085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.188148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.188448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.188525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.188830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.188894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.189194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.189258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.189514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.189580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.189872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.189936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.190139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.190208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.190454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.190532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.190822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.190887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.191131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.191197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.191446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.191533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.191821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.191901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.192158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.192223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.192444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.192527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.192793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.192858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.193164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.193228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.193526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.193591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.193854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.193918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.194215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.194279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.194582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.194647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.194890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.194955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.195202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.195266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.195515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.195581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.195838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.195902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.196120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.196183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.196400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.196466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.196682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.196744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.156 [2024-11-18 08:09:57.197032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.156 [2024-11-18 08:09:57.197096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.156 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.197344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.197408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.197670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.197737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.198027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.198090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.198346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.198409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.198672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.198737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.199021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.199084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.199334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.199397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.199722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.199788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.199995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.200058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.200353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.200416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.200686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.200753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.201025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.201090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.201348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.201411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.201679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.201745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.202049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.202113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.202359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.202425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.202700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.202765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.203058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.203122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.203360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.203426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.203707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.203772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.204074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.204138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.204353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.204416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.204632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.204696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.204945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.205019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.205265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.205331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.442 [2024-11-18 08:09:57.205607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.442 [2024-11-18 08:09:57.205672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.442 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.205917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.205980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.206170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.206235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.206473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.206552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.206801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.206866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.207115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.207182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.207428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.207523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.207795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.207859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.208091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.208155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.208450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.208528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.208785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.208849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.209100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.209164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.209466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.209545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.209845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.209909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.210155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.210221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.210526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.210592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.210857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.210922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.211138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.211201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.211508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.211573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.211863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.211928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.212241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.212305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.212543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.212609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.212908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.212971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.213242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.443 [2024-11-18 08:09:57.213306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.443 qpair failed and we were unable to recover it.
00:36:04.443 [2024-11-18 08:09:57.213555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.213620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.213877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.213941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.214158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.214222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.214517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.214582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.214828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.214892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 
00:36:04.443 [2024-11-18 08:09:57.215094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.215161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.215405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.215472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.215730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.215795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.215987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.216051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.216300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.216364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 
00:36:04.443 [2024-11-18 08:09:57.216586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.216652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.216837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.216902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.217159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.217224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.217474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.443 [2024-11-18 08:09:57.217555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.443 qpair failed and we were unable to recover it. 00:36:04.443 [2024-11-18 08:09:57.217798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.217872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 
00:36:04.444 [2024-11-18 08:09:57.218100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.218165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.218417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.218482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.218731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.218794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.219081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.219146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.219373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.219439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 
00:36:04.444 [2024-11-18 08:09:57.219756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.219820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.220063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.220128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.220417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.220481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.220712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.220777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.221059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.221123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 
00:36:04.444 [2024-11-18 08:09:57.221421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.221486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.221787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.221852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.222054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.222117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.222370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.222435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.222635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.222702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 
00:36:04.444 [2024-11-18 08:09:57.222948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.223011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.223302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.223365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.223622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.223688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.223982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.224045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.224293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.224356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 
00:36:04.444 [2024-11-18 08:09:57.224574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.224640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.224863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.224928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.225210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.225274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.225534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.225600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.225848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.225912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 
00:36:04.444 [2024-11-18 08:09:57.226159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.226226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.226518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.226584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.226837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.226901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.227147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.227212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.227437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.227529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 
00:36:04.444 [2024-11-18 08:09:57.227786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.227852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.228048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.228113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.228403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.228465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.228732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.228796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.444 [2024-11-18 08:09:57.229088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.229153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 
00:36:04.444 [2024-11-18 08:09:57.229407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.444 [2024-11-18 08:09:57.229470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.444 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.229774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.229838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.230058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.230123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.230415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.230479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.230786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.230860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 
00:36:04.445 [2024-11-18 08:09:57.231112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.231176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.231428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.231505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.231779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.231843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.232096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.232162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.232481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.232559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 
00:36:04.445 [2024-11-18 08:09:57.232846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.232911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.233127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.233191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.233505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.233571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.233788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.233852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.234040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.234104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 
00:36:04.445 [2024-11-18 08:09:57.234298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.234363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.234574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.234642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.234867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.234931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.235148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.235213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.235448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.235537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 
00:36:04.445 [2024-11-18 08:09:57.235785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.235852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.236133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.236197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.236444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.236522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.236808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.236873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 00:36:04.445 [2024-11-18 08:09:57.237070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.445 [2024-11-18 08:09:57.237135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.445 qpair failed and we were unable to recover it. 
00:36:04.445 [2024-11-18 08:09:57.237380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.445 [2024-11-18 08:09:57.237444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.445 qpair failed and we were unable to recover it.
00:36:04.448 [last three-line error sequence repeated 114 more times for tqpair=0x7f7b00000b90, addr=10.0.0.2, port=4420, between 08:09:57.237712 and 08:09:57.274840]
00:36:04.448 [2024-11-18 08:09:57.275083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.448 [2024-11-18 08:09:57.275149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.448 qpair failed and we were unable to recover it. 00:36:04.448 [2024-11-18 08:09:57.275441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.448 [2024-11-18 08:09:57.275519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.448 qpair failed and we were unable to recover it. 00:36:04.448 [2024-11-18 08:09:57.275747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.448 [2024-11-18 08:09:57.275812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.448 qpair failed and we were unable to recover it. 00:36:04.448 [2024-11-18 08:09:57.276101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.448 [2024-11-18 08:09:57.276164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.448 qpair failed and we were unable to recover it. 00:36:04.448 [2024-11-18 08:09:57.276423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.448 [2024-11-18 08:09:57.276487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.448 qpair failed and we were unable to recover it. 
00:36:04.448 [2024-11-18 08:09:57.276694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.276755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.277000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.277060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.277315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.277376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.277618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.277683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.277934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.277999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 
00:36:04.449 [2024-11-18 08:09:57.278229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.278296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.278568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.278635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.278882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.278945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.279231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.279295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.279589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.279654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 
00:36:04.449 [2024-11-18 08:09:57.279858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.279921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.280221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.280285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.280531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.280596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.280788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.280854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.281064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.281129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 
00:36:04.449 [2024-11-18 08:09:57.281422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.281488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.281745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.281809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.282031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.282095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.282382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.282447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.282710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.282783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 
00:36:04.449 [2024-11-18 08:09:57.282991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.283057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.283274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.283339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.283638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.283703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.283967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.284031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.284331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.284395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 
00:36:04.449 [2024-11-18 08:09:57.284653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.284720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.285010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.285076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.285327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.285392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.285703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.285768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.286026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.286090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 
00:36:04.449 [2024-11-18 08:09:57.286347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.286411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.286716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.286781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.286983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.287046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.287361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.287425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.287702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.287767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 
00:36:04.449 [2024-11-18 08:09:57.288002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.288066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.288326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.288390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.449 [2024-11-18 08:09:57.288696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.449 [2024-11-18 08:09:57.288762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.449 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.289051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.289116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.289407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.289471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 
00:36:04.450 [2024-11-18 08:09:57.289679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.289747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.289957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.290022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.290315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.290382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.290702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.290768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.291066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.291130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 
00:36:04.450 [2024-11-18 08:09:57.291398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.291462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.291785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.291850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.292122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.292186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.292441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.292522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.292769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.292833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 
00:36:04.450 [2024-11-18 08:09:57.293089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.293153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.293450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.293540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.293791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.293858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.294155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.294219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.294453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.294538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 
00:36:04.450 [2024-11-18 08:09:57.294829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.294892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.295115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.295179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.295427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.295510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.295772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.295836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.296082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.296158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 
00:36:04.450 [2024-11-18 08:09:57.296456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.296539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.296788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.296852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.297098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.297162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.297468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.297550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.297837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.297901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 
00:36:04.450 [2024-11-18 08:09:57.298192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.298256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.298472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.298552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.298791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.298856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.299077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.299141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.299384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.299447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 
00:36:04.450 [2024-11-18 08:09:57.299713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.299780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.300078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.300143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.300439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.300520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.300785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.300849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 00:36:04.450 [2024-11-18 08:09:57.301103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.450 [2024-11-18 08:09:57.301167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.450 qpair failed and we were unable to recover it. 
00:36:04.450 [2024-11-18 08:09:57.301371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.301433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.301700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.301766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.302057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.302120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.302376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.302440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.302662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.302726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 
00:36:04.451 [2024-11-18 08:09:57.303011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.303077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.303383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.303447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.303720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.303784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.304020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.304083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.304338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.304403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 
00:36:04.451 [2024-11-18 08:09:57.304670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.304735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.304964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.305032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.305313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.305378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.305682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.305747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.306008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.306073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 
00:36:04.451 [2024-11-18 08:09:57.306372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.306436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.306740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.306805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.307051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.307115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.307356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.307419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.307730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.307795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 
00:36:04.451 [2024-11-18 08:09:57.308108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.308172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.308358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.308421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.308683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.308751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.309004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.309072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.309340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.309404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 
00:36:04.451 [2024-11-18 08:09:57.309699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.309766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.310029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.310093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.310346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.310410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.310698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.310764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.311063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.311127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 
00:36:04.451 [2024-11-18 08:09:57.311367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.311432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.311733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.311798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.312056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.451 [2024-11-18 08:09:57.312120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.451 qpair failed and we were unable to recover it. 00:36:04.451 [2024-11-18 08:09:57.312367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.312432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.312653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.312718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 
00:36:04.452 [2024-11-18 08:09:57.312934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.312997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.313248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.313312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.313613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.313681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.313915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.313979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.314268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.314332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 
00:36:04.452 [2024-11-18 08:09:57.314550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.314618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.314852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.314916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.315208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.315272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.315560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.315624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.315868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.315932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 
00:36:04.452 [2024-11-18 08:09:57.316180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.316247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.316503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.316569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.316862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.316926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.317213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.317278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.317537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.317602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 
00:36:04.452 [2024-11-18 08:09:57.317900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.317965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.318261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.318334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.318584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.318652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 896215 Killed "${NVMF_APP[@]}" "$@"
00:36:04.452 [2024-11-18 08:09:57.318911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.318976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.319235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.319298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.319564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.319629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.319879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:04.452 [2024-11-18 08:09:57.319946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.320239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.320305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:04.452 [2024-11-18 08:09:57.320573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.320635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.320882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.320950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:04.452 [2024-11-18 08:09:57.321209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.321274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:04.452 [2024-11-18 08:09:57.321544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 [2024-11-18 08:09:57.321610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.321906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.452 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:04.452 [2024-11-18 08:09:57.321988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.452 qpair failed and we were unable to recover it.
00:36:04.452 [2024-11-18 08:09:57.322296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.322360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.322608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.322671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.322981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.323045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.323289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.323356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 00:36:04.452 [2024-11-18 08:09:57.323606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.323642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.452 qpair failed and we were unable to recover it. 
00:36:04.452 [2024-11-18 08:09:57.323794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.452 [2024-11-18 08:09:57.323829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.323993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.324028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.324173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.324210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.324421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.324485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.324794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.324829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 
00:36:04.453 [2024-11-18 08:09:57.324978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.325014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.325204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.325239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.325399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.325440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.325625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.325660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.325813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.325848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 
00:36:04.453 [2024-11-18 08:09:57.325966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.326001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.326248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.326314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.326597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.326634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.326741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.326774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=896763
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:04.453 [2024-11-18 08:09:57.327041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 896763
00:36:04.453 [2024-11-18 08:09:57.327103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 896763 ']'
00:36:04.453 [2024-11-18 08:09:57.327355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:04.453 [2024-11-18 08:09:57.327420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:04.453 [2024-11-18 08:09:57.327683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.327722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:04.453 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:04.453 [2024-11-18 08:09:57.327979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.328046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.328978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.329012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.329164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.329194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.329298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.329326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.329424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.329454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.329567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.329596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.329741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.329789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 00:36:04.453 [2024-11-18 08:09:57.329911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.453 [2024-11-18 08:09:57.329966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.453 qpair failed and we were unable to recover it. 
00:36:04.453 [2024-11-18 08:09:57.330097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.330125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.330220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.330251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.330383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.330410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.330505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.330534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.330638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.330674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.330841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.330869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.330989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.331019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.331106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.331134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.453 [2024-11-18 08:09:57.331255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.453 [2024-11-18 08:09:57.331289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.453 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.331428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.331456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.331616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.331672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.331806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.331865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.331957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.331985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.332120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.332156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.332284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.332311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.332427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.332461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.332578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.332607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.332758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.332787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.332941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.332969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.333098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.333133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.333267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.333296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.333418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.333452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.333567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.333596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.333738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.333767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.333870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.333898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.334018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.334046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.334140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.334174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.334280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.334309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.334420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.334448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.334556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.334586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.334711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.334739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.334871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.334900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.335008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.335037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.335134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.335162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.335300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.335329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.335481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.335518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.335620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.335650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.335745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.335773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.335921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.335950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.336057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.336085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.336181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.336209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.336338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.336366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.336519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.336547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.336671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.336701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.336822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.336854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.454 qpair failed and we were unable to recover it.
00:36:04.454 [2024-11-18 08:09:57.336986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.454 [2024-11-18 08:09:57.337015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.337160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.337187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.337323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.337357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.337464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.337502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.337601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.337628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.337724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.337753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.337843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.337871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.337990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.338022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.338130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.338158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.338281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.338312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.338471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.338507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.338606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.338633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.338764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.338804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.338961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.338990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.339101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.339130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.339281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.339312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.339408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.339435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.339535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.339564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.339658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.339685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.339795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.339823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.339952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.339981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.340102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.340131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.340262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.340290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.340411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.340440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.340567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.340596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.340690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.340720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.340858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.340886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.340977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.341004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.341135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.341164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.341258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.341287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.341397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.341432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.341552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.341581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.341697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.341732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.341841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.341868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.341990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.342022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.342117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.342146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.342265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.342294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.342422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.455 [2024-11-18 08:09:57.342451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.455 qpair failed and we were unable to recover it.
00:36:04.455 [2024-11-18 08:09:57.342553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.342582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.342709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.342745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.342850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.342878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.343000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.343031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.343168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.343196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.343305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.343355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.343535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.343577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.343706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.343746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.343889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.343918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.344031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.344058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.344172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.344206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.344334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.456 [2024-11-18 08:09:57.344363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.456 qpair failed and we were unable to recover it.
00:36:04.456 [2024-11-18 08:09:57.344465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.344504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.344611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.344640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.344737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.344766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.344898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.344927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.345055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.345082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 
00:36:04.456 [2024-11-18 08:09:57.345189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.345225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.345372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.345401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.345515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.345544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.345649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.345677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.345814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.345842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 
00:36:04.456 [2024-11-18 08:09:57.345973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.346002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.346127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.346155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.346304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.346333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.346459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.346486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.346620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.346654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 
00:36:04.456 [2024-11-18 08:09:57.346805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.346833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.346940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.346969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.347056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.347084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.347232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.347263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.347364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.347391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 
00:36:04.456 [2024-11-18 08:09:57.347485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.347520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.347639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.456 [2024-11-18 08:09:57.347682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.456 qpair failed and we were unable to recover it. 00:36:04.456 [2024-11-18 08:09:57.347790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.347820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.347915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.347945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.348044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.348072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 
00:36:04.457 [2024-11-18 08:09:57.348202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.348230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.348316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.348345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.348440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.348470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.348638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.348686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.348823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.348878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 
00:36:04.457 [2024-11-18 08:09:57.349018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.349070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.349169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.349198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.349303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.349332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.349471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.349509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.349628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.349676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 
00:36:04.457 [2024-11-18 08:09:57.349774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.349802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.349934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.349967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.350093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.350122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.350269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.350297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.350403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.350434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 
00:36:04.457 [2024-11-18 08:09:57.350562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.350604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.350738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.350768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.350871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.350900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.351001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.351030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.351152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.351181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 
00:36:04.457 [2024-11-18 08:09:57.351274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.351302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.351428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.351459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.351599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.351645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.351775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.351822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.351966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.351999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 
00:36:04.457 [2024-11-18 08:09:57.352164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.352197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.352334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.352366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.352461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.352500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.352619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.352647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.352744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.352772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 
00:36:04.457 [2024-11-18 08:09:57.352886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.352914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.353074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.353109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.353271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.353303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.353420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.353450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 00:36:04.457 [2024-11-18 08:09:57.353603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.457 [2024-11-18 08:09:57.353632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.457 qpair failed and we were unable to recover it. 
00:36:04.458 [2024-11-18 08:09:57.353721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.353766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.353894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.353924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.354016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.354046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.354174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.354205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.354352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.354380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 
00:36:04.458 [2024-11-18 08:09:57.354473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.354508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.354606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.354634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.354747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.354778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.354878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.354908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.355008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.355044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 
00:36:04.458 [2024-11-18 08:09:57.355209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.355258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.355381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.355412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.355567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.355596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.355695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.355723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.355829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.355860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 
00:36:04.458 [2024-11-18 08:09:57.355958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.355989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.356120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.356150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.356283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.356313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.356430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.356459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.356587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.356618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 
00:36:04.458 [2024-11-18 08:09:57.356718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.356747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.356863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.356894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.357018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.357048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.357156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.357187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 00:36:04.458 [2024-11-18 08:09:57.357288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.458 [2024-11-18 08:09:57.357318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.458 qpair failed and we were unable to recover it. 
00:36:04.458 [2024-11-18 08:09:57.357475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.357516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.357631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.357661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.357759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.357787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.357910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.357938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.358053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.358082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.358182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.358211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.358336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.358365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.358511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.358540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.358633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.358660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.358754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.358783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.358893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.358922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.359082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.359111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.458 qpair failed and we were unable to recover it.
00:36:04.458 [2024-11-18 08:09:57.359223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.458 [2024-11-18 08:09:57.359251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.359406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.359434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.359560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.359589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.359709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.359739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.359862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.359890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.359978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.360006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.360089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.360117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.360245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.360274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.360360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.360388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.360502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.360548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.360643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.360672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.360775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.360804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.360908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.360943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.361093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.361122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.361214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.361243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.361342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.361372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.361539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.361570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.361673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.361702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.361810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.361841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.361970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.361999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.362098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.362129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.362228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.362259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.362381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.362410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.362563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.362592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.362676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.362705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.362800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.362828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.362957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.362986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.363099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.363127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.363222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.363250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.363372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.363400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.363510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.363540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.363646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.363675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.363761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.363791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.363911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.363940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.364063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.364091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.364190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.364218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.459 qpair failed and we were unable to recover it.
00:36:04.459 [2024-11-18 08:09:57.364340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.459 [2024-11-18 08:09:57.364368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.364498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.364529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.364673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.364723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.364840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.364871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.364972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.365006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.365105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.365135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.365259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.365287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.365392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.365421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.365516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.365545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.365674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.365709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.365802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.365830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.365915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.365942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.366099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.366135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.366260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.366288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.366429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.366472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.366598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.366627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.366716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.366749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.366867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.366894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.367018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.367045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.367167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.367195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.367286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.367314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.367434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.367462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.367566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.367594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.367684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.367711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.367803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.367830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.367954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.367983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.368104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.368132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.368219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.368247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.368355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.368383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.368504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.368536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.368674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.368702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.368798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.368827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.368918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.368945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.369066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.460 [2024-11-18 08:09:57.369094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.460 qpair failed and we were unable to recover it.
00:36:04.460 [2024-11-18 08:09:57.369210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.369237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.369322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.369349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.369444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.369471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.369578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.369605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.369691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.369718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.369799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.369827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.369941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.369968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.370066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.370096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.370213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.370240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.370360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.370389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.370484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.370537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.370651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.370678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.370772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.370803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.370900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.370929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.371063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.371089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.371228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.371256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.371372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.371399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.371500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.371527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.371619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.371645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.371731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.371757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.371873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.371899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.372019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.372046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.372135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.372167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.372280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.372307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.372414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.372441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.372535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.372562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.372652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.372679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.372797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.372823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.372909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.372936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.373024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.373050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.373168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.373196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.373317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.373347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.373443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.461 [2024-11-18 08:09:57.373470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.461 qpair failed and we were unable to recover it.
00:36:04.461 [2024-11-18 08:09:57.373569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.461 [2024-11-18 08:09:57.373597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.461 qpair failed and we were unable to recover it. 00:36:04.461 [2024-11-18 08:09:57.373683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.461 [2024-11-18 08:09:57.373711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.461 qpair failed and we were unable to recover it. 00:36:04.461 [2024-11-18 08:09:57.373825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.461 [2024-11-18 08:09:57.373851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.461 qpair failed and we were unable to recover it. 00:36:04.461 [2024-11-18 08:09:57.373949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.461 [2024-11-18 08:09:57.373976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.461 qpair failed and we were unable to recover it. 00:36:04.461 [2024-11-18 08:09:57.374073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.461 [2024-11-18 08:09:57.374064] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:04.462 [2024-11-18 08:09:57.374101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 
00:36:04.462 [2024-11-18 08:09:57.374136] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:04.462 [2024-11-18 08:09:57.374220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.374246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.374336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.374362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.374479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.374513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.374637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.374663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 
00:36:04.462 [2024-11-18 08:09:57.374803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.374829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.374940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.374966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.375081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.375107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.375229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.375260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.375385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.375411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 
00:36:04.462 [2024-11-18 08:09:57.375508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.375535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.375626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.375653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.375770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.375797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.375897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.375923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.376009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.376036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 
00:36:04.462 [2024-11-18 08:09:57.376122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.376151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.376242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.376275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.376385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.376413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.376501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.376529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.376607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.376633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 
00:36:04.462 [2024-11-18 08:09:57.376720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.376747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.376878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.376907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.377023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.377050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.377149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.377177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.377272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.377303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 
00:36:04.462 [2024-11-18 08:09:57.377417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.377451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.377607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.377634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.377726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.377758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.377849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.377876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.378015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.378043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 
00:36:04.462 [2024-11-18 08:09:57.378152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.378178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.378285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.378311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.378396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.378423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.378538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.378566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.378680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.378706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 
00:36:04.462 [2024-11-18 08:09:57.378796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.378822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.378909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.462 [2024-11-18 08:09:57.378935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.462 qpair failed and we were unable to recover it. 00:36:04.462 [2024-11-18 08:09:57.379050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.379077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.379197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.379226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.379351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.379380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 
00:36:04.463 [2024-11-18 08:09:57.379478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.379512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.379631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.379660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.379772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.379811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.379933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.379962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.380051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.380079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 
00:36:04.463 [2024-11-18 08:09:57.380192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.380219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.380364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.380391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.380477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.380512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.380656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.380682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.380799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.380826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 
00:36:04.463 [2024-11-18 08:09:57.380911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.380939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.381032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.381059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.381173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.381199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.381312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.381338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.381428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.381456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 
00:36:04.463 [2024-11-18 08:09:57.381552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.381580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.381695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.381722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.381810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.381840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.381935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.381962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.382046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.382072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 
00:36:04.463 [2024-11-18 08:09:57.382166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.382194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.382307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.382334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.382419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.382446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.382569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.382596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.382678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.382709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 
00:36:04.463 [2024-11-18 08:09:57.382799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.382825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.382910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.382937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.383051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.383076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.383190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.383216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 00:36:04.463 [2024-11-18 08:09:57.383303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.463 [2024-11-18 08:09:57.383332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.463 qpair failed and we were unable to recover it. 
00:36:04.463 [2024-11-18 08:09:57.383448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.463 [2024-11-18 08:09:57.383475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.463 qpair failed and we were unable to recover it.
00:36:04.463 [2024-11-18 08:09:57.383605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.463 [2024-11-18 08:09:57.383635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.463 qpair failed and we were unable to recover it.
00:36:04.463 [2024-11-18 08:09:57.383754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.463 [2024-11-18 08:09:57.383781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.463 qpair failed and we were unable to recover it.
00:36:04.463 [2024-11-18 08:09:57.383868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.463 [2024-11-18 08:09:57.383895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.463 qpair failed and we were unable to recover it.
00:36:04.463 [2024-11-18 08:09:57.383977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.463 [2024-11-18 08:09:57.384003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.463 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.384149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.384176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.384261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.384287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.384411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.384437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.384535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.384563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.384650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.384677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.384799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.384826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.384916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.384943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.385070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.385096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.385210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.385237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.385325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.385352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.385435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.385462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.385553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.385581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.385673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.385699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.385790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.385816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.385895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.385920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.386046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.386074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.386211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.386237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.386325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.386352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.386472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.386504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.386592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.386620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.386736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.386764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.386912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.386941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.387053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.387079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.387161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.387187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.387297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.387323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.387437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.387464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.387583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.387613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.387699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.387725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.387820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.387848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.387965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.387995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.388127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.388155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.388234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.388261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.388379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.388405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.388500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.388527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.388638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.388664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.388741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.464 [2024-11-18 08:09:57.388767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.464 qpair failed and we were unable to recover it.
00:36:04.464 [2024-11-18 08:09:57.388854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.388881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.389000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.389026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.389147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.389174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.389264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.389291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.389406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.389432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.389518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.389544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.389659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.389685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.389806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.389832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.389949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.389975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.390059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.390086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.390208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.390234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.390324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.390350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.390433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.390459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.390561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.390587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.390696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.390721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.390827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.390854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.390941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.390966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.391047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.391073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.391163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.391204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.391305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.391332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.391428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.391457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.391588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.391616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.391762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.391788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.391908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.391935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.392051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.392079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.392202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.392227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.392309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.392338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.392467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.392505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.392624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.392651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.392746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.392778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.392909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.392936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.393024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.465 [2024-11-18 08:09:57.393052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.465 qpair failed and we were unable to recover it.
00:36:04.465 [2024-11-18 08:09:57.393218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.393245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.393357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.393391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.393494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.393522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.393606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.393632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.393718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.393745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.393860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.393886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.394003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.394031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.394119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.394145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.394265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.394297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.394387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.394413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.394529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.394559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.394696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.394723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.394806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.394832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.394938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.394965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.395077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.395103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.395199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.395228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.395369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.395395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.395510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.395537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.395623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.395649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.395758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.395784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.395897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.395924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.396011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.396038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.396141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.396168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.396279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.396305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.396399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.396428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.396540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.396580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.396713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.396757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.396888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.396915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.397004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.397031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.397132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.397162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.397250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.397277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.397390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.397417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.397533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.397560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.397651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.397678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.397772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.397799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.397941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.397967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.398082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.398108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.466 qpair failed and we were unable to recover it.
00:36:04.466 [2024-11-18 08:09:57.398239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.466 [2024-11-18 08:09:57.398267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.467 qpair failed and we were unable to recover it.
00:36:04.467 [2024-11-18 08:09:57.398413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.467 [2024-11-18 08:09:57.398442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.467 qpair failed and we were unable to recover it.
00:36:04.467 [2024-11-18 08:09:57.398566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.467 [2024-11-18 08:09:57.398593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.467 qpair failed and we were unable to recover it.
00:36:04.467 [2024-11-18 08:09:57.398719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.467 [2024-11-18 08:09:57.398746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.467 qpair failed and we were unable to recover it.
00:36:04.467 [2024-11-18 08:09:57.398833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.467 [2024-11-18 08:09:57.398864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.467 qpair failed and we were unable to recover it.
00:36:04.467 [2024-11-18 08:09:57.398965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.398995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.399089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.399116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.399231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.399257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.399368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.399403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.399511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.399540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 
00:36:04.467 [2024-11-18 08:09:57.399633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.399660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.399740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.399767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.399858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.399885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.400036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.400062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.400148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.400175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 
00:36:04.467 [2024-11-18 08:09:57.400288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.400314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.400402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.400428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.400549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.400576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.400696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.400722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.400817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.400843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 
00:36:04.467 [2024-11-18 08:09:57.400986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.401012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.401127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.401154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.401239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.401265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.401410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.401438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.401547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.401586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 
00:36:04.467 [2024-11-18 08:09:57.401729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.401767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.401896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.401924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.402044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.402071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.402160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.402186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.402301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.402329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 
00:36:04.467 [2024-11-18 08:09:57.402476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.402516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.402618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.402657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.402757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.402786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.402903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.402931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.403018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.403045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 
00:36:04.467 [2024-11-18 08:09:57.403131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.403157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.403250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.403278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.403381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.467 [2024-11-18 08:09:57.403420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.467 qpair failed and we were unable to recover it. 00:36:04.467 [2024-11-18 08:09:57.403571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.403600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.403694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.403721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 
00:36:04.468 [2024-11-18 08:09:57.403803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.403829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.403958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.403984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.404073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.404100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.404218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.404246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.404343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.404377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 
00:36:04.468 [2024-11-18 08:09:57.404468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.404502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.404593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.404621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.404707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.404734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.404844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.404872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.404961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.404989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 
00:36:04.468 [2024-11-18 08:09:57.405110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.405138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.405259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.405287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.405402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.405428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.405516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.405542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.405630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.405656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 
00:36:04.468 [2024-11-18 08:09:57.405745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.405771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.405911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.405937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.406016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.406042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.406141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.406170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.406256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.406283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 
00:36:04.468 [2024-11-18 08:09:57.406365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.406391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.406524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.406550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.406659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.406698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.406800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.406828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.406919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.406945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 
00:36:04.468 [2024-11-18 08:09:57.407033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.407059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.407135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.407161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.407301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.407326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.407415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.407440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.407534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.407560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 
00:36:04.468 [2024-11-18 08:09:57.407675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.407701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.407813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.407843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.407953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.407978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.408096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.408121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.408265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.408293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 
00:36:04.468 [2024-11-18 08:09:57.408406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.468 [2024-11-18 08:09:57.408432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.468 qpair failed and we were unable to recover it. 00:36:04.468 [2024-11-18 08:09:57.408548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.408576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.408665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.408691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.408799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.408825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.408906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.408932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 
00:36:04.469 [2024-11-18 08:09:57.409021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.409047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.409159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.409184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.409275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.409303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.409386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.409412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.409497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.409523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 
00:36:04.469 [2024-11-18 08:09:57.409640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.409666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.409753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.409779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.409891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.409918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.410035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.410062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 00:36:04.469 [2024-11-18 08:09:57.410141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.469 [2024-11-18 08:09:57.410167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.469 qpair failed and we were unable to recover it. 
00:36:04.469 [2024-11-18 08:09:57.410293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.410331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.410451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.410478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.410575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.410601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.410713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.410739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.410863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.410889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.411966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.411991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.412075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.412101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.412192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.412220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.412315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.412354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.412475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.412508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.412600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.412627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.412717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.412743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.412886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.412913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.412997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.413024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.413114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.413141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.469 qpair failed and we were unable to recover it.
00:36:04.469 [2024-11-18 08:09:57.413241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.469 [2024-11-18 08:09:57.413280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.413375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.413402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.413518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.413545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.413638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.413665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.413755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.413781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.413892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.413918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.414006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.414032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.414157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.414196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.414315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.414343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.414437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.414463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.414565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.414591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.414737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.414763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.414884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.414912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.414998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.415025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.415146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.415173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.415261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.415288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.415372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.415398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.415514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.415540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.415627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.415653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.415767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.415793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.415890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.415916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.416008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.416034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.416117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.416143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.416262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.416291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.416391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.416429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.416532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.416565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.416683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.416709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.416846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.416872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.416984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.417009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.417130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.417158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.417274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.417301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.417428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.417467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.417570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.417596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.417697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.417723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.470 qpair failed and we were unable to recover it.
00:36:04.470 [2024-11-18 08:09:57.417811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.470 [2024-11-18 08:09:57.417836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.417930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.417956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.418076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.418103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.418199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.418227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.418341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.418367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.418512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.418540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.418633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.418659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.418746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.418771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.418854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.418881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.418972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.419000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.419121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.419148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.419262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.419290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.419375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.419401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.419534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.419561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.419679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.419705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.419827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.419853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.419944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.419970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.420056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.420082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.420175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.420211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.420326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.420352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.420439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.420464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.420565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.420592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.420687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.420724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.420831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.420870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.420964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.420992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.421106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.421133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.421274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.421300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.421394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.421420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.421522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.421551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.421672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.421698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.421812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.421838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.421927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.421955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.422051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.422076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.422220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.422248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.422330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.422357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.422462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.422488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.422579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.422606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.422691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.471 [2024-11-18 08:09:57.422717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.471 qpair failed and we were unable to recover it.
00:36:04.471 [2024-11-18 08:09:57.422798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.471 [2024-11-18 08:09:57.422825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.422913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.422939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.423056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.423082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.423178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.423208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.423325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.423352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 
00:36:04.472 [2024-11-18 08:09:57.423442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.423470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.423565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.423591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.423721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.423760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.423848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.423875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.424019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.424045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 
00:36:04.472 [2024-11-18 08:09:57.424153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.424179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.424253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.424279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.424365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.424391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.424510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.424535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.424618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.424643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 
00:36:04.472 [2024-11-18 08:09:57.424751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.424777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.424856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.424881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.424989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.425015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.425126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.425151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.425271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.425299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 
00:36:04.472 [2024-11-18 08:09:57.425413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.425441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.425542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.425571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.425659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.425685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.425771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.425798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.425939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.425965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 
00:36:04.472 [2024-11-18 08:09:57.426109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.426135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.426251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.426279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.426400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.426428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.426531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.426559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.426677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.426703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 
00:36:04.472 [2024-11-18 08:09:57.426792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.426818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.426933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.426960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.427051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.427078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.427165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.427193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.427285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.427312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 
00:36:04.472 [2024-11-18 08:09:57.427454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.427481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.427585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.427613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.427702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.472 [2024-11-18 08:09:57.427728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.472 qpair failed and we were unable to recover it. 00:36:04.472 [2024-11-18 08:09:57.427824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.427849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.427964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.427990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 
00:36:04.473 [2024-11-18 08:09:57.428081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.428107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.428187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.428214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.428327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.428354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.428447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.428486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.428587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.428615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 
00:36:04.473 [2024-11-18 08:09:57.428704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.428733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.428852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.428878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.428967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.428999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.429118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.429144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.429229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.429256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 
00:36:04.473 [2024-11-18 08:09:57.429348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.429376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.429543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.429582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.429684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.429711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.429825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.429850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.429959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.429984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 
00:36:04.473 [2024-11-18 08:09:57.430095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.430123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.430207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.430235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.430332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.430371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.430452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.430480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.430598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.430625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 
00:36:04.473 [2024-11-18 08:09:57.430734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.430761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.430878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.430904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.430991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.431020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.431111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.431139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.431254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.431281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 
00:36:04.473 [2024-11-18 08:09:57.431369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.431395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.431507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.431533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.431647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.431673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.431750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.431775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.431864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.431890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 
00:36:04.473 [2024-11-18 08:09:57.431980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.432006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.432124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.432151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.432239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.432267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.432356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.432382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.473 [2024-11-18 08:09:57.432472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.432506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 
00:36:04.473 [2024-11-18 08:09:57.432623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.473 [2024-11-18 08:09:57.432649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.473 qpair failed and we were unable to recover it. 00:36:04.474 [2024-11-18 08:09:57.432777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.432817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 00:36:04.474 [2024-11-18 08:09:57.432912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.432941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 00:36:04.474 [2024-11-18 08:09:57.433021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.433047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 00:36:04.474 [2024-11-18 08:09:57.433130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.433157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 
00:36:04.474 [2024-11-18 08:09:57.433247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.433274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 00:36:04.474 [2024-11-18 08:09:57.433386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.433413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 00:36:04.474 [2024-11-18 08:09:57.433505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.433532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 00:36:04.474 [2024-11-18 08:09:57.433620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.433645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 00:36:04.474 [2024-11-18 08:09:57.433736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.474 [2024-11-18 08:09:57.433764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.474 qpair failed and we were unable to recover it. 
00:36:04.474 [2024-11-18 08:09:57.433882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.474 [2024-11-18 08:09:57.433908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.474 qpair failed and we were unable to recover it.
00:36:04.477 [... the same three-line error sequence repeats continuously from 08:09:57.433882 through 08:09:57.448621, with tqpair values 0x7f7b00000b90, 0x7f7af8000b90, 0x7f7af4000b90, and 0x160f690, all with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:04.477 [2024-11-18 08:09:57.448707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.448733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.448813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.448840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.448924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.448950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.449030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.449056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.449138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.449164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 
00:36:04.477 [2024-11-18 08:09:57.449291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.449330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.449444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.449472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.449559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.449585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.449709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.449735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.449870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.449896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 
00:36:04.477 [2024-11-18 08:09:57.449979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.450091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.450202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.450344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.450448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 
00:36:04.477 [2024-11-18 08:09:57.450564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.450687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.450815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.450926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.450951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.451065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.451090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 
00:36:04.477 [2024-11-18 08:09:57.451175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.477 [2024-11-18 08:09:57.451201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.477 qpair failed and we were unable to recover it. 00:36:04.477 [2024-11-18 08:09:57.451315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.451346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.451432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.451457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.451560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.451586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.451723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.451749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 
00:36:04.478 [2024-11-18 08:09:57.451866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.451894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.451982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.452115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.452223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.452329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 
00:36:04.478 [2024-11-18 08:09:57.452438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.452557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.452673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.452786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.452909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.452935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 
00:36:04.478 [2024-11-18 08:09:57.453081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.453108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.453188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.453214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.453292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.453318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.453398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.453423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.453535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.453562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 
00:36:04.478 [2024-11-18 08:09:57.453673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.453699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.453786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.453814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.453898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.453924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.454044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.454175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 
00:36:04.478 [2024-11-18 08:09:57.454282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.454383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.454505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.454621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.454742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 
00:36:04.478 [2024-11-18 08:09:57.454854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.454961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.454987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.455076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.455103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.455188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.455215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.455340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.455378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 
00:36:04.478 [2024-11-18 08:09:57.455506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.455535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.455622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.455648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.455757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.455783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.455860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.478 [2024-11-18 08:09:57.455886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.478 qpair failed and we were unable to recover it. 00:36:04.478 [2024-11-18 08:09:57.455963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.479 [2024-11-18 08:09:57.455989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.479 qpair failed and we were unable to recover it. 
00:36:04.479 [2024-11-18 08:09:57.456108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.479 [2024-11-18 08:09:57.456134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.479 qpair failed and we were unable to recover it. 00:36:04.479 [2024-11-18 08:09:57.456211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.479 [2024-11-18 08:09:57.456244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.479 qpair failed and we were unable to recover it. 00:36:04.479 [2024-11-18 08:09:57.456322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.479 [2024-11-18 08:09:57.456348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.479 qpair failed and we were unable to recover it. 00:36:04.479 [2024-11-18 08:09:57.456437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.479 [2024-11-18 08:09:57.456463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.479 qpair failed and we were unable to recover it. 00:36:04.479 [2024-11-18 08:09:57.456562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.479 [2024-11-18 08:09:57.456592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.479 qpair failed and we were unable to recover it. 
00:36:04.479 [2024-11-18 08:09:57.456696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.456734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.456826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.456853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.456963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.456988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.457106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.457131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.457213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.457214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:04.479 [2024-11-18 08:09:57.457239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.457350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.457375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.457461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.457486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.457579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.457604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.457714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.457739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.457845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.457870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.457960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.457986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.458074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.458099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.458190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.458216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.458301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.458326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.458422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.458446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.458544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.458569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.458696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.458735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.458855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.458884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.458998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.459024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.459133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.459159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.459277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.459303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.459445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.459471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.459561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.459587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.459718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.459757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.459851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.459878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.459992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.460017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.460100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.460126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.479 qpair failed and we were unable to recover it.
00:36:04.479 [2024-11-18 08:09:57.460203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.479 [2024-11-18 08:09:57.460229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.480 qpair failed and we were unable to recover it.
00:36:04.480 [2024-11-18 08:09:57.460316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.480 [2024-11-18 08:09:57.460341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.480 qpair failed and we were unable to recover it.
00:36:04.480 [2024-11-18 08:09:57.460449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.480 [2024-11-18 08:09:57.460474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.480 qpair failed and we were unable to recover it.
00:36:04.480 [2024-11-18 08:09:57.460601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.460627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.460713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.460738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.460827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.460853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.460957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.460996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.461117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.461144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 
00:36:04.480 [2024-11-18 08:09:57.461234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.461260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.461345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.461372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.461502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.461529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.461625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.461651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.461732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.461758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 
00:36:04.480 [2024-11-18 08:09:57.461845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.461873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.462016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.462042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.462161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.462187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.462304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.462332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.462461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.462506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 
00:36:04.480 [2024-11-18 08:09:57.462609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.462636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.462728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.462756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.462844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.462871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.462990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.463109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 
00:36:04.480 [2024-11-18 08:09:57.463234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.463344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.463486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.463609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.463725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 
00:36:04.480 [2024-11-18 08:09:57.463842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.463948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.463973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.464081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.464108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.464191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.464217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.464313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.464338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 
00:36:04.480 [2024-11-18 08:09:57.464445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.464471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.464598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.464625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.464718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.464744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.464863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.464894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 00:36:04.480 [2024-11-18 08:09:57.464986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.465011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.480 qpair failed and we were unable to recover it. 
00:36:04.480 [2024-11-18 08:09:57.465132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.480 [2024-11-18 08:09:57.465158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.465233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.465259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.465366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.465392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.465534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.465560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.465647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.465672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 
00:36:04.481 [2024-11-18 08:09:57.465760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.465786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.465871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.465896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.465990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.466016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.466136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.466162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.466280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.466306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 
00:36:04.481 [2024-11-18 08:09:57.466389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.466415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.466506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.466531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.466631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.466657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.466799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.466824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.466937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.466962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 
00:36:04.481 [2024-11-18 08:09:57.467102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.467128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.467218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.467244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.467326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.467352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.467432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.467457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.467590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.467617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 
00:36:04.481 [2024-11-18 08:09:57.467734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.467761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.467875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.467901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.467985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.468012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.468107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.468133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.468273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.468299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 
00:36:04.481 [2024-11-18 08:09:57.468390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.468416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.468537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.468564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.468646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.468671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.468746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.468771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.468853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.468880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 
00:36:04.481 [2024-11-18 08:09:57.468973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.469000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.469090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.469116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.469194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.469220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.469300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.469327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 
00:36:04.481 [2024-11-18 08:09:57.469426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.469466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.469585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.469640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.469771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.469832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.469962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.481 [2024-11-18 08:09:57.470008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.481 qpair failed and we were unable to recover it. 00:36:04.481 [2024-11-18 08:09:57.470159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.482 [2024-11-18 08:09:57.470197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.482 qpair failed and we were unable to recover it. 
00:36:04.482 [2024-11-18 08:09:57.470339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.470364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.470504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.470530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.470621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.470646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.470737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.470769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.470904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.470938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.471057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.471084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.471170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.471196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.471306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.471333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.471427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.471454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.471585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.471612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.471729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.471755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.471873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.471899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.472016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.472042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.472161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.472188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.472333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.472359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.472445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.472471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.472570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.472597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.472686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.472712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.472802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.472828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.472916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.472942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.473060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.473086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.473178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.473204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.473291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.473317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.473401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.473427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.473511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.473538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.473636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.473662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.473776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.473808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.473921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.473948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.474062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.474089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.474169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.474196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.474338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.474364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.474478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.474509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.474598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.474625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.474720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.474746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.474825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.474851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.474963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.474990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.482 [2024-11-18 08:09:57.475110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.482 [2024-11-18 08:09:57.475137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.482 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.475220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.475247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.475336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.475362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.475448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.475474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.475602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.475629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.475716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.475742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.475837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.475864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.475953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.475980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.476067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.476093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.476203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.476229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.476315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.476342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.476421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.476448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.476549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.476577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.476656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.476683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.476769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.476796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.476937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.476963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.477045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.477071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.477164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.477191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.477290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.477333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.477456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.477484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.477587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.477614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.477699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.477725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.477811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.477837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.477943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.477969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.478096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.478135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.478261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.478290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.478378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.478407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.478495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.478522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.478606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.478632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.478721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.478748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.478885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.478916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.479032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.479059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.479197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.479223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.479318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.483 [2024-11-18 08:09:57.479343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.483 qpair failed and we were unable to recover it.
00:36:04.483 [2024-11-18 08:09:57.479470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.479522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.479688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.479716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.479801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.479827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.479922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.479947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.480031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.480057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.480137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.480163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.480279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.480304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.480384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.480410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.480525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.480552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.480633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.480658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.480782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.480811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.480901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.480929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.481084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.481110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.481198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.481224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.481338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.481364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.481452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.481479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.481570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.481596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.481707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.481733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.481824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.481850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.481999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.482026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.482111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.482137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.482222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.482248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.482362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.482389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.482511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.482537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.482657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.482683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.482798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.482823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.482905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.482931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.483015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.483041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.483128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.483155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.483240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.483266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.483384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.483412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.483539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.483566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.483655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.483682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.483805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.483830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.483939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.483964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.484054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.484080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.484164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.484196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.484 [2024-11-18 08:09:57.484286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.484 [2024-11-18 08:09:57.484312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.484 qpair failed and we were unable to recover it.
00:36:04.485 [2024-11-18 08:09:57.484393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.485 [2024-11-18 08:09:57.484420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.485 qpair failed and we were unable to recover it.
00:36:04.485 [2024-11-18 08:09:57.484506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.485 [2024-11-18 08:09:57.484534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.485 qpair failed and we were unable to recover it.
00:36:04.485 [2024-11-18 08:09:57.484619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.485 [2024-11-18 08:09:57.484646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.485 qpair failed and we were unable to recover it.
00:36:04.485 [2024-11-18 08:09:57.484737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.485 [2024-11-18 08:09:57.484763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.485 qpair failed and we were unable to recover it.
00:36:04.485 [2024-11-18 08:09:57.484851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.485 [2024-11-18 08:09:57.484877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.485 qpair failed and we were unable to recover it.
00:36:04.485 [2024-11-18 08:09:57.484967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.485 [2024-11-18 08:09:57.484994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.485 qpair failed and we were unable to recover it.
00:36:04.485 [2024-11-18 08:09:57.485082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.485 [2024-11-18 08:09:57.485108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.485 qpair failed and we were unable to recover it.
00:36:04.485 [2024-11-18 08:09:57.485217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.485243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.485354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.485380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.485501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.485528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.485615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.485641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.485728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.485754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 
00:36:04.485 [2024-11-18 08:09:57.485840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.485867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.485949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.485975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.486065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.486091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.486186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.486212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.486329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.486358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 
00:36:04.485 [2024-11-18 08:09:57.486441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.486467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.486604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.486643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.486780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.486807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.486924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.486951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.487095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.487121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 
00:36:04.485 [2024-11-18 08:09:57.487210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.487236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.487325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.487351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.487441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.487469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.487598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.487625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.487723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.487750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 
00:36:04.485 [2024-11-18 08:09:57.487860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.487886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.487975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.488002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.488117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.488143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.488259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.488287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.488400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.488427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 
00:36:04.485 [2024-11-18 08:09:57.488538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.488565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.488687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.488713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.488803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.488829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.488945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.488971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.489061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.489088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 
00:36:04.485 [2024-11-18 08:09:57.489175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.489201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.489288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.489320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-11-18 08:09:57.489435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-11-18 08:09:57.489462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.489555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.489582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.489670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.489697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-11-18 08:09:57.489798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.489824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.489935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.489961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.490076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.490102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.490215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.490240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.490326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.490352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-11-18 08:09:57.490465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.490499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.490612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.490638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.490721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.490747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.490832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.490858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.490968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.490994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-11-18 08:09:57.491091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.491117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.491202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.491228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.491343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.491369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.491445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.491472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.491588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.491614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-11-18 08:09:57.491694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.491721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.491838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.491865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.491951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.491977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.492088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.492114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.492202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.492227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-11-18 08:09:57.492340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.492365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.492454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.492481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.492609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.492636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.492767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.492794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.492904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.492930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-11-18 08:09:57.493014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.493041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.493153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.493179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.493292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.493318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.493400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.493425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.493543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.493572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-11-18 08:09:57.493657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.493683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.493824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.493850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.493933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.493961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.494051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.494077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.494188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.494214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-11-18 08:09:57.494317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.494343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.494434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.494465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.494559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.494585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-11-18 08:09:57.494678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-11-18 08:09:57.494704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.494820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.494846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-11-18 08:09:57.494933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.494960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.495036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.495149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.495265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.495370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-11-18 08:09:57.495493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.495607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.495717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.495864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.495968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.495994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-11-18 08:09:57.496117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.496143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.496258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.496284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.496396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.496422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.496523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.496549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.496638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.496665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-11-18 08:09:57.496752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.496779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.496865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.496891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.497000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.497026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.497139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.497166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.497247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.497273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-11-18 08:09:57.497362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.497388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.497502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.497528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.497620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.497646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.497762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.497788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.497865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.497891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-11-18 08:09:57.497981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.498008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.498116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.498142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.498229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.498257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.498336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.498363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.498471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.498503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-11-18 08:09:57.498594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.498621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.498704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.498731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.498874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.498901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.498985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.499010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.499100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.499127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-11-18 08:09:57.499244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.499270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.499382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.499414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.499535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.499563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.499654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-11-18 08:09:57.499683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-11-18 08:09:57.499790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.499817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-11-18 08:09:57.499907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.499934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.500041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.500067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.500151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.500177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.500262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.500287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.500373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.500399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-11-18 08:09:57.500510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.500536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.500642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.500681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.500783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.500811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.500924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.500951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.501032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.501059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-11-18 08:09:57.501184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.501212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.501326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.501353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.501497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.501537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.501653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.501681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.501800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.501826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-11-18 08:09:57.501914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.501940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.502013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.502153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.502264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.502400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-11-18 08:09:57.502513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.502628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.502742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.502858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.502967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.502993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-11-18 08:09:57.503108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.503133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-11-18 08:09:57.503245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-11-18 08:09:57.503274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-11-18 08:09:57.503357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-11-18 08:09:57.503386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-11-18 08:09:57.503480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-11-18 08:09:57.503514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-11-18 08:09:57.503642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-11-18 08:09:57.503669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 
00:36:04.489 [2024-11-18 08:09:57.503789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-11-18 08:09:57.503823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-11-18 08:09:57.503946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.773 [2024-11-18 08:09:57.503972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.773 qpair failed and we were unable to recover it. 00:36:04.773 [2024-11-18 08:09:57.504067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.504093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.504190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.504216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.504296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.504323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 
00:36:04.774 [2024-11-18 08:09:57.504388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161d630 (9): Bad file descriptor 00:36:04.774 [2024-11-18 08:09:57.504544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.504585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.504684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.504712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.504797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.504823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.504926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.504952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.505053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.505079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 
00:36:04.774 [2024-11-18 08:09:57.505181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.505208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.505295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.505321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.505416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.505441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.505552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.505581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.505688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.505716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 
00:36:04.774 [2024-11-18 08:09:57.505821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.505856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.505943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.505969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.506064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.506097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.506196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.506223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.506324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.506358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 
00:36:04.774 [2024-11-18 08:09:57.506463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.506502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.506584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.506611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.506695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.506722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.506813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.506839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.506930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.506956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 
00:36:04.774 [2024-11-18 08:09:57.507039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.507066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.507150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.507176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.507257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.507285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.507364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.507391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.507495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.507524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 
00:36:04.774 [2024-11-18 08:09:57.507609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.507635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.507721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.507747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.507862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.507892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.507839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:04.774 [2024-11-18 08:09:57.507873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:04.774 [2024-11-18 08:09:57.507890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:04.774 [2024-11-18 08:09:57.507903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:04.774 [2024-11-18 08:09:57.507914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:04.774 [2024-11-18 08:09:57.507976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.508001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.508089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.508114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.508227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.508252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.508368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.774 [2024-11-18 08:09:57.508394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.774 qpair failed and we were unable to recover it. 00:36:04.774 [2024-11-18 08:09:57.508481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.508513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 
00:36:04.775 [2024-11-18 08:09:57.508600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.508626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.508707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.508733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.508852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.508878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.508992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.509018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.509133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.509158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 
00:36:04.775 [2024-11-18 08:09:57.509247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.509274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.509399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.509425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.509537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.509565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.509667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.509696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.509540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:04.775 [2024-11-18 08:09:57.509594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:04.775 [2024-11-18 08:09:57.509639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:04.775 [2024-11-18 08:09:57.509637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 
00:36:04.775 [2024-11-18 08:09:57.509819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.509846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.509938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.509962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.510057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.510082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.510175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.510200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.510311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.510337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 
00:36:04.775 [2024-11-18 08:09:57.510423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.510449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.510543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.510569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.510649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.510676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.510790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.510816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.510907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.510933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 
00:36:04.775 [2024-11-18 08:09:57.511011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.511110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.511225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.511336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.511449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 
00:36:04.775 [2024-11-18 08:09:57.511562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.511691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.511820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.511932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.511959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.512047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.512078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 
00:36:04.775 [2024-11-18 08:09:57.512187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.512213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.512298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.512324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.512417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.512448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.512555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.512595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.775 [2024-11-18 08:09:57.512684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.512712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 
00:36:04.775 [2024-11-18 08:09:57.512795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.775 [2024-11-18 08:09:57.512820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.775 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.512914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.512941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.513057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.513083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.513170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.513195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.513279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.513305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-11-18 08:09:57.513450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.513475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.513577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.513603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.513686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.513712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.513796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.513820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.513905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.513930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-11-18 08:09:57.514040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.514065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.514147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.514173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.514259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.514285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.514360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.514386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.514519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.514558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-11-18 08:09:57.514692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.514721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.514808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.514836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.514925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.514957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.515057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.515086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.515166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.515192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-11-18 08:09:57.515284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.515310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.515411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.515436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.515533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.515559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.515647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.515673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.515762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.515792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-11-18 08:09:57.515886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.515912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.516025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.516052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.516171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.516198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.516283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.516310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.519569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.519612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-11-18 08:09:57.519729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.519757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.519900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.519926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.520007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.520033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.520144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.520169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.520256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.520282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-11-18 08:09:57.520371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.520397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.520484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.520517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.520609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.520634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.776 qpair failed and we were unable to recover it. 00:36:04.776 [2024-11-18 08:09:57.520755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.776 [2024-11-18 08:09:57.520781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.777 qpair failed and we were unable to recover it. 00:36:04.777 [2024-11-18 08:09:57.520860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.777 [2024-11-18 08:09:57.520886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-11-18 08:09:57.520968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.777 [2024-11-18 08:09:57.520994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.777 qpair failed and we were unable to recover it. 00:36:04.777 [2024-11-18 08:09:57.521127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.777 [2024-11-18 08:09:57.521167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.777 qpair failed and we were unable to recover it. 00:36:04.777 [2024-11-18 08:09:57.521263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.777 [2024-11-18 08:09:57.521290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.777 qpair failed and we were unable to recover it. 00:36:04.777 [2024-11-18 08:09:57.521400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.777 [2024-11-18 08:09:57.521441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.777 qpair failed and we were unable to recover it. 00:36:04.777 [2024-11-18 08:09:57.521558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.777 [2024-11-18 08:09:57.521588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-11-18 08:09:57.521681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.521709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.521803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.521829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.521945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.521971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.522064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.522090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.522181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.522208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.522303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.522329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.522431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.522485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.522628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.522656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.522740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.522766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.522848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.522874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.522991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.523018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.523109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.523136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.523291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.523330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.523456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.523498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.523586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.523615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.523706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.523731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.523829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.523855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.523935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.523961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.524075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.524183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.524301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.524440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.524566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.524675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.524788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.524891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.524981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.525009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.525123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.525149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.525227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.525256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.525346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.525373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.525502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.777 [2024-11-18 08:09:57.525529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.777 qpair failed and we were unable to recover it.
00:36:04.777 [2024-11-18 08:09:57.525602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.525629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.525722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.525750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.525884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.525910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.526028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.526056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.526150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.526176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.526293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.526320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.526434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.526461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.526569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.526597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.526707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.526746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.526864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.526890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.526987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.527088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.527193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.527308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.527439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.527556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.527705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.527853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.527958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.527984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.528096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.528124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.528202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.528228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.528341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.528368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.528449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.528486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.528605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.528632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.528731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.528757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.528849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.528875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.528990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.529016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.529129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.529155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.529273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.529298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.778 qpair failed and we were unable to recover it.
00:36:04.778 [2024-11-18 08:09:57.529380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.778 [2024-11-18 08:09:57.529406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.529499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.529524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.529605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.529630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.529739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.529764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.529849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.529874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.529949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.529975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.530058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.530082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.530181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.530220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.530308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.530337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.530437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.530477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.530603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.530630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.530725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.530750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.530841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.530866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.530950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.530980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.531069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.531094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.531175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.531200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.531287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.531314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.531401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.531426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.531529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.531555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.531672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.531697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.531795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.531821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.531903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.531929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.532972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.532996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.533081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.533106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.533197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.533222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.533306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.533331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.533471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.533501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.533604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.533643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.533742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.533790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.779 qpair failed and we were unable to recover it.
00:36:04.779 [2024-11-18 08:09:57.533908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.779 [2024-11-18 08:09:57.533936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.534048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.534075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.534159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.534186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.534282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.534314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.534404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.534430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.534528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.534556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.534650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.534676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.534803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.534829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.534915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.534941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.535048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.535074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.535157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.535184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.535282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.535311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.535409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.535436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.535540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.535568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.535648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.535674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.535760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.535787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.535874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.535901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.536021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.780 [2024-11-18 08:09:57.536048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.780 qpair failed and we were unable to recover it.
00:36:04.780 [2024-11-18 08:09:57.536130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.536156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.536234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.536260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.536379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.536407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.536507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.536533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.536619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.536645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 
00:36:04.780 [2024-11-18 08:09:57.536727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.536753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.536836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.536862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.536941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.536966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.537082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.537107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.537203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.537243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 
00:36:04.780 [2024-11-18 08:09:57.537331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.537360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.537446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.537473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.537562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.537593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.537679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.537705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.537801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.537826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 
00:36:04.780 [2024-11-18 08:09:57.537945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.537972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.538057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.538082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.538183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.538208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.538292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.538317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 00:36:04.780 [2024-11-18 08:09:57.538423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.780 [2024-11-18 08:09:57.538449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.780 qpair failed and we were unable to recover it. 
00:36:04.780 [2024-11-18 08:09:57.538544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.538573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.538662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.538688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.538790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.538816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.538898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.538923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.539002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 
00:36:04.781 [2024-11-18 08:09:57.539130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.539268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.539407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.539523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.539633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 
00:36:04.781 [2024-11-18 08:09:57.539746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.539861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.539965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.539990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.540106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.540132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.540220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.540248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 
00:36:04.781 [2024-11-18 08:09:57.540340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.540369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.540464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.540497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.540583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.540609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.540691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.540717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.540814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.540843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 
00:36:04.781 [2024-11-18 08:09:57.540929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.540957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.541038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.541064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.541145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.541172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.541262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.541288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.541400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.541428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 
00:36:04.781 [2024-11-18 08:09:57.541516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.541543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.541666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.541692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.541780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.541805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.541893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.541919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.542028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.542053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 
00:36:04.781 [2024-11-18 08:09:57.542139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.542164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.542250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.542277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.542369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.542398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.542512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.542558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.542646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.542672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 
00:36:04.781 [2024-11-18 08:09:57.542756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.781 [2024-11-18 08:09:57.542782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.781 qpair failed and we were unable to recover it. 00:36:04.781 [2024-11-18 08:09:57.542884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.542910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.542993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.543097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.543206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 
00:36:04.782 [2024-11-18 08:09:57.543306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.543451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.543566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.543681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.543789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 
00:36:04.782 [2024-11-18 08:09:57.543899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.543926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.544020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.544142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.544259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.544373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 
00:36:04.782 [2024-11-18 08:09:57.544485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.544602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.544714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.544823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.544965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.544990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 
00:36:04.782 [2024-11-18 08:09:57.545076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.545104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.545190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.545218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.545334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.545361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.545446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.545473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.545564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.545596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 
00:36:04.782 [2024-11-18 08:09:57.545677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.545705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.545797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.545823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.545914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.545942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.546035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.546141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 
00:36:04.782 [2024-11-18 08:09:57.546286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.546397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.546512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.546621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.546760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 
00:36:04.782 [2024-11-18 08:09:57.546863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.546969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.546995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.547100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.547140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.782 [2024-11-18 08:09:57.547245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.782 [2024-11-18 08:09:57.547273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.782 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.547387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.547414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 
00:36:04.783 [2024-11-18 08:09:57.547509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.547536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.547623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.547649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.547732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.547759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.547875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.547903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.547995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 
00:36:04.783 [2024-11-18 08:09:57.548113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.548226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.548369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.548474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.548595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 
00:36:04.783 [2024-11-18 08:09:57.548706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.548815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.548972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.548999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.549087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.549115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.549231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.549258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 
00:36:04.783 [2024-11-18 08:09:57.549344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.549371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.549463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.549497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.549583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.549609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.549736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.549762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.549856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.549884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 
00:36:04.783 [2024-11-18 08:09:57.549997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.550025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.550109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.550136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.550251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.550278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.550352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.550378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.550455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.550502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 
00:36:04.783 [2024-11-18 08:09:57.550604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.550631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.550720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.550747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.550871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.550902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.550996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.551024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.551111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.551137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 
00:36:04.783 [2024-11-18 08:09:57.551220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.551246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.783 [2024-11-18 08:09:57.551341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.783 [2024-11-18 08:09:57.551382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.783 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.551505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.551533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.551627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.551653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.551730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.551756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 
00:36:04.784 [2024-11-18 08:09:57.551848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.551876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.551968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.551994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.552072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.552100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.552188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.552217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.552307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.552337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 
00:36:04.784 [2024-11-18 08:09:57.552445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.552472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.552563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.552589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.552672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.552699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.552786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.552814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.552931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.552958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 
00:36:04.784 [2024-11-18 08:09:57.553035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.553064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.553157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.553185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.553277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.553304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.553386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.553412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.553508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.553534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 
00:36:04.784 [2024-11-18 08:09:57.553619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.553647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.553765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.553801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.553889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.553917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.554008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.554034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.554119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.554146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 
00:36:04.784 [2024-11-18 08:09:57.554276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.554316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.554440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.554468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.554565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.554594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.554690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.554717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.554800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.554826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 
00:36:04.784 [2024-11-18 08:09:57.554913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.554940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.555026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.555143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.555264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.555402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 
00:36:04.784 [2024-11-18 08:09:57.555522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.555632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.555735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.555838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.784 qpair failed and we were unable to recover it. 00:36:04.784 [2024-11-18 08:09:57.555954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.784 [2024-11-18 08:09:57.555981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 
00:36:04.785 [2024-11-18 08:09:57.556065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.556091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.556204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.556230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.556314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.556340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.556424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.556452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.556536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.556563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 
00:36:04.785 [2024-11-18 08:09:57.556648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.556676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.556765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.556792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.556882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.556910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.557006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.557035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.557121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.557148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 
00:36:04.785 [2024-11-18 08:09:57.557232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.557257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.557332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.557358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.557439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.557465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.557592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.557620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 00:36:04.785 [2024-11-18 08:09:57.557711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.785 [2024-11-18 08:09:57.557738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.785 qpair failed and we were unable to recover it. 
00:36:04.785 [2024-11-18 08:09:57.557818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.557845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.557926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.557951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.558059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.558211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.558351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.558453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.558566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.558675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.558790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.558903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.558989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.559093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.559209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.559335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.559460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.559586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.559721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.559829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.559938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.559966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.560047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.785 [2024-11-18 08:09:57.560072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.785 qpair failed and we were unable to recover it.
00:36:04.785 [2024-11-18 08:09:57.560161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.560187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.560267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.560292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.560401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.560426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.560513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.560539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.560652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.560678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.560766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.560792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.560870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.560896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.561942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.561968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.562049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.562074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.562154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.562182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.562270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.562298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.562384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.562412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.562527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.562554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.562669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.562695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.562783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.562809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.562896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.562924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.563010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.563038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.563153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.563178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.563265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.563290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.563376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.563403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.563499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.563527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.563641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.563667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.563745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.563772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.563891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.563917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.564024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.564049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.564139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.564167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.564249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.786 [2024-11-18 08:09:57.564276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.786 qpair failed and we were unable to recover it.
00:36:04.786 [2024-11-18 08:09:57.564366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.564405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.564500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.564528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.564637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.564663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.564749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.564776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.564861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.564887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.564971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.564998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.565085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.565111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.565201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.565229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.565310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.565336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.565466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.565499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.565590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.565616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.565695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.565721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.565845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.565871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.565984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.566010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.566099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.566125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.566206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.566231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.566362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.566401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.566526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.566554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.566640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.566674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.566765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.566791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.566881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.566907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.566992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.567099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.567217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.567332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.567441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.567565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.567706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.567818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.567920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.567947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.568060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.568086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.568189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.568229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.568319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.568346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.568434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.568460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.568546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.568572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.568658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.568684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.568769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.787 [2024-11-18 08:09:57.568795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.787 qpair failed and we were unable to recover it.
00:36:04.787 [2024-11-18 08:09:57.568873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.788 [2024-11-18 08:09:57.568899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.788 qpair failed and we were unable to recover it.
00:36:04.788 [2024-11-18 08:09:57.568986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.788 [2024-11-18 08:09:57.569016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.788 qpair failed and we were unable to recover it.
00:36:04.788 [2024-11-18 08:09:57.569147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.788 [2024-11-18 08:09:57.569186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.788 qpair failed and we were unable to recover it.
00:36:04.788 [2024-11-18 08:09:57.569281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.569308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.569444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.569470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.569556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.569583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.569677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.569705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.569787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.569813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-11-18 08:09:57.569930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.569958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.570067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.570093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.570177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.570205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.570285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.570311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.570398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.570425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-11-18 08:09:57.570530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.570558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.570644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.570671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.570781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.570808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.570921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.570947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.571038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.571067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-11-18 08:09:57.571160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.571187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.571268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.571295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.571377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.571402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.571504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.571533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.571655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.571681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-11-18 08:09:57.571764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.571791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.571876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.571902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.571988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.572100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.572208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-11-18 08:09:57.572321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.572460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.572573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.572686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-11-18 08:09:57.572829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-11-18 08:09:57.572935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-11-18 08:09:57.572962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.573040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.573154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.573262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.573372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-11-18 08:09:57.573484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.573619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.573727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.573841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.573960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.573988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-11-18 08:09:57.574079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.574108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.574210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.574236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.574325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.574350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.574425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.574450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.574552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.574578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-11-18 08:09:57.574660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.574692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.574772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.574798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.574877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.574903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.575036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.575061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.575151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.575176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-11-18 08:09:57.575270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.575298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.575383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.575410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.575509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.575538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.575649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.575675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.575759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.575785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-11-18 08:09:57.575862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.575889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.576008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.576118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.576230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.576340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-11-18 08:09:57.576442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.576591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.576702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.576809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.576935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.576960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-11-18 08:09:57.577047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.577072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.577158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.577183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.577265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.577293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-11-18 08:09:57.577412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-11-18 08:09:57.577440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.577556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.577584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-11-18 08:09:57.577777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.577803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.577919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.577945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.578030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.578062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.578159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.578185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.578297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.578323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-11-18 08:09:57.578440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.578466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.578673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.578700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.578787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.578813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.578928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.578955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.579035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.579062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-11-18 08:09:57.579173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.579199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.579300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.579338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.579441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.579469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.579570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.579605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.579691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.579721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-11-18 08:09:57.579814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.579841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.579926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.579953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.580046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.580076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.580170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.580196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.580311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.580338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-11-18 08:09:57.580451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.580477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.580578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.580604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.580687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.580714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.580802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.580827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.580921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.580946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-11-18 08:09:57.581038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.581064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.581145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.581171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.581263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.581287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.581390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.581424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.581532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.581560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-11-18 08:09:57.581648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.581674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.581773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.581800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.581902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.581929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.582080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.582127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-11-18 08:09:57.582232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-11-18 08:09:57.582259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.791 [2024-11-18 08:09:57.582359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.582387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.582512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.582545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.582645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.582672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.582768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.582796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.582883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.582913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 
00:36:04.791 [2024-11-18 08:09:57.583026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.583053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.583148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.583174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.583265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.583297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.583388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.583416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.583506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.583533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 
00:36:04.791 [2024-11-18 08:09:57.583619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.583647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.583730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.583757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.583845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.583871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.583979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.584011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.584124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.584150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 
00:36:04.791 [2024-11-18 08:09:57.584238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.584265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.584355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.584382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.584502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.584529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.584622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.584648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.584733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.584758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 
00:36:04.791 [2024-11-18 08:09:57.584867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.584893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.584979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.585090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.585209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.585317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 
00:36:04.791 [2024-11-18 08:09:57.585438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.585590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.585703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.585808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.585916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.585942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 
00:36:04.791 [2024-11-18 08:09:57.586024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.586050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.586129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.586159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.586259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.586286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.586379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.586407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 00:36:04.791 [2024-11-18 08:09:57.586532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.586567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.791 qpair failed and we were unable to recover it. 
00:36:04.791 [2024-11-18 08:09:57.586654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.791 [2024-11-18 08:09:57.586680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.586775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.586801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.586897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.586924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.587002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.587027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.587114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.587140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 
00:36:04.792 [2024-11-18 08:09:57.587263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.587292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.587383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.587409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.587533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.587560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.587650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.587675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.587761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.587788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 
00:36:04.792 [2024-11-18 08:09:57.587884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.587910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.587998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.588023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.588102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.588132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.588248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.588279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.588362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.588394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 
00:36:04.792 [2024-11-18 08:09:57.588530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.588557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.588639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.588665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.588757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.588790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.588900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.588927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.589019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.589053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 
00:36:04.792 [2024-11-18 08:09:57.589181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.589209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.589293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.589319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.589415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.589441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.589573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.589599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.589692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.589718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 
00:36:04.792 [2024-11-18 08:09:57.589815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.589841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.589957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.589982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.590076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.590102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.590230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.590269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.590397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.590425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 
00:36:04.792 [2024-11-18 08:09:57.590545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.590573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.590654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.590688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.590790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.590816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.792 qpair failed and we were unable to recover it. 00:36:04.792 [2024-11-18 08:09:57.590896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.792 [2024-11-18 08:09:57.590922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.793 qpair failed and we were unable to recover it. 00:36:04.793 [2024-11-18 08:09:57.591011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.793 [2024-11-18 08:09:57.591038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.793 qpair failed and we were unable to recover it. 
00:36:04.793 [2024-11-18 08:09:57.591153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.793 [2024-11-18 08:09:57.591179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.793 qpair failed and we were unable to recover it. 00:36:04.793 [2024-11-18 08:09:57.591285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.793 [2024-11-18 08:09:57.591333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.793 qpair failed and we were unable to recover it. 00:36:04.793 [2024-11-18 08:09:57.591443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.793 [2024-11-18 08:09:57.591470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.793 qpair failed and we were unable to recover it. 00:36:04.793 [2024-11-18 08:09:57.591573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.793 [2024-11-18 08:09:57.591599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.793 qpair failed and we were unable to recover it. 00:36:04.793 [2024-11-18 08:09:57.591693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.793 [2024-11-18 08:09:57.591721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.793 qpair failed and we were unable to recover it. 
00:36:04.793 [2024-11-18 08:09:57.591824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.793 [2024-11-18 08:09:57.591850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.793 qpair failed and we were unable to recover it. 
00:36:04.796 [the two messages above repeat continuously from 08:09:57.591824 through 08:09:57.605859 for tqpair values 0x7f7b00000b90, 0x7f7af8000b90, 0x7f7af4000b90, and 0x160f690; every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 and no qpair is recovered] 
00:36:04.796 [2024-11-18 08:09:57.605944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.605971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.606058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.606085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.606170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.606196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.606284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.606312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.606433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.606466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 
00:36:04.796 [2024-11-18 08:09:57.606567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.606595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.606710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.606738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.606833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.606859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.606970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.607124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 
00:36:04.796 [2024-11-18 08:09:57.607239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.607349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.607452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.607575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.607683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 
00:36:04.796 [2024-11-18 08:09:57.607863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.607970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.607996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.608076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.608104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.608256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.608282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.608367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.608393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 
00:36:04.796 [2024-11-18 08:09:57.608477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.608511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.608618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-11-18 08:09:57.608644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-11-18 08:09:57.608842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.608870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.608959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.608985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.609062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.609088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-11-18 08:09:57.609169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.609194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.609278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.609303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.609395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.609429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.609523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.609551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.609635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.609662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-11-18 08:09:57.609747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.609774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.609866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.609893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.609975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.610125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.610265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-11-18 08:09:57.610374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.610501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.610613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.610720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.610822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-11-18 08:09:57.610935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.610960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.611072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.611188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.611304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.611407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-11-18 08:09:57.611515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.611628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.611731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.611846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.611961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.611986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-11-18 08:09:57.612069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.612096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.612175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.612205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.612286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.612311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.612427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.612452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-11-18 08:09:57.612547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.612574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-11-18 08:09:57.612653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-11-18 08:09:57.612679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.612797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.612823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.612904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.612931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.613020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.613046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.613158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.613183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-11-18 08:09:57.613295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.613326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.613428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.613454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.613544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.613571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.613656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.613683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.613777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.613803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-11-18 08:09:57.613879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.613905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.614022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.614049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.614128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.614157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.614250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.614276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.614358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.614391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-11-18 08:09:57.614520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.614547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.614672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.614700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.614791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.614817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.614891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.614917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.614994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-11-18 08:09:57.615114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.615246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.615354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.615478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.615598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-11-18 08:09:57.615707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.615820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.615931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.615958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.616043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.616069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.616186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.616218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-11-18 08:09:57.616309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.616337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.616423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.616449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.616540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.616567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.616680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.616705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.616787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.616813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-11-18 08:09:57.616892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.616924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.617008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.617033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-11-18 08:09:57.617112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-11-18 08:09:57.617139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.617226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.617251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.617336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.617365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-11-18 08:09:57.617458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.617504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.617639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.617668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.617751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.617777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.617869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.617896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.618011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.618037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-11-18 08:09:57.618158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.618185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.618264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.618291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.618373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.618399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.618524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.618550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.618630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.618656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-11-18 08:09:57.618768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.618795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.618877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.618905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.619016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.619042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.619123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.619149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.619241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.619269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-11-18 08:09:57.619356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.619382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.619510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.619552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.619655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.619683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.619766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.619792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.619871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.619898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-11-18 08:09:57.619984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.620099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.620211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.620328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.620432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-11-18 08:09:57.620544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.620646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.620752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.620865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.620893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.620985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.621018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-11-18 08:09:57.621110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.621137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.621220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.621247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.621326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.621353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.621436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.621464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-11-18 08:09:57.621558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-11-18 08:09:57.621585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-11-18 08:09:57.621696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.621721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.621799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.621825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.621906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.621932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.622010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.622036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.622112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.622137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 
00:36:04.800 [2024-11-18 08:09:57.622243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.622269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.622349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.622374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.622467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.622507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.622603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.622629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.622709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.622736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 
00:36:04.800 [2024-11-18 08:09:57.622865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.622892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.622986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.623132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.623266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.623373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 
00:36:04.800 [2024-11-18 08:09:57.623494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.623607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.623717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.623849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.623957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.623985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 
00:36:04.800 [2024-11-18 08:09:57.624082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.624108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.624212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.624258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.624361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.624389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.624469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.624504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.624604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.624631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 
00:36:04.800 [2024-11-18 08:09:57.624710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.624735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.624816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.624842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.624919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.624945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.625032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.625066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.625161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.625189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 
00:36:04.800 [2024-11-18 08:09:57.625273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.625300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.625388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.625414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.625512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.625539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.625623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.625651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 00:36:04.800 [2024-11-18 08:09:57.625728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.625759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.800 qpair failed and we were unable to recover it. 
00:36:04.800 [2024-11-18 08:09:57.625850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.800 [2024-11-18 08:09:57.625876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.625963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.625989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.626068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.626094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.626174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.626200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.626280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.626306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 
00:36:04.801 [2024-11-18 08:09:57.626391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.626419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.626518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.626544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.626624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.626651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.626763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.626789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.626870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.626896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 
00:36:04.801 [2024-11-18 08:09:57.626977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.627107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.627245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.627363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.627474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 
00:36:04.801 [2024-11-18 08:09:57.627586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.627690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.627800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.627909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.627935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.628018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 
00:36:04.801 [2024-11-18 08:09:57.628131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.628231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.628337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.628440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.628561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 
00:36:04.801 [2024-11-18 08:09:57.628672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.628777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.628893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.628918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.628998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.629103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 
00:36:04.801 [2024-11-18 08:09:57.629213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.629347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.629459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.629591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.629708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 
00:36:04.801 [2024-11-18 08:09:57.629820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.629968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.629996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.630071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.801 [2024-11-18 08:09:57.630096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.801 qpair failed and we were unable to recover it. 00:36:04.801 [2024-11-18 08:09:57.630175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.630201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.630311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.630337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 
00:36:04.802 [2024-11-18 08:09:57.630428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.630459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.630572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.630609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.630711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.630738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.630848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.630874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.630955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.630983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 
00:36:04.802 [2024-11-18 08:09:57.631065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.631093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.631182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.631209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.631295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.631322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.631435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.631461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.631548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.631574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 
00:36:04.802 [2024-11-18 08:09:57.631652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.631678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.631759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.631786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.631872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.631899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.631984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.632011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.632127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.632154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 
00:36:04.802 [2024-11-18 08:09:57.632296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.632336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.632431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.632461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.632562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.632588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.632670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.632696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.632785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.632812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 
00:36:04.802 [2024-11-18 08:09:57.632894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.632920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.633002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.633030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.633123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.633154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.633268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.633295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.633374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.633401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 
00:36:04.802 [2024-11-18 08:09:57.633518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.633544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.633625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.633657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.633767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.633792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.633870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.633897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.633995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.634022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 
00:36:04.802 [2024-11-18 08:09:57.634220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.802 [2024-11-18 08:09:57.634245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.802 qpair failed and we were unable to recover it. 00:36:04.802 [2024-11-18 08:09:57.634327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.634353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.634426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.634452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.634551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.634579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.634681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.634709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 
00:36:04.803 [2024-11-18 08:09:57.634793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.634820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.634907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.634938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.635025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.635052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.635166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.635191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.635270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:04.803 [2024-11-18 08:09:57.635295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 
00:36:04.803 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:04.803 [2024-11-18 08:09:57.635418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.635446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.635542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.635570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.635657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.635683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:04.803 [2024-11-18 08:09:57.635770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.635796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 
00:36:04.803 [2024-11-18 08:09:57.635879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.803 [2024-11-18 08:09:57.635904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.635987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.636013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.636130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.636159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.636240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.636267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.636352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.636380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 
00:36:04.803 [2024-11-18 08:09:57.636502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.636530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.636626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.636654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.636749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.636775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.636866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.636898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.636996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.637022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 
00:36:04.803 [2024-11-18 08:09:57.637132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.637157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.637283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.637321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.637399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.637425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.637508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.637535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 00:36:04.803 [2024-11-18 08:09:57.637626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.803 [2024-11-18 08:09:57.637653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.803 qpair failed and we were unable to recover it. 
00:36:04.803 [2024-11-18 08:09:57.637731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.803 [2024-11-18 08:09:57.637758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.803 qpair failed and we were unable to recover it.
00:36:04.803 [2024-11-18 08:09:57.637871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.803 [2024-11-18 08:09:57.637898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.803 qpair failed and we were unable to recover it.
00:36:04.803 [2024-11-18 08:09:57.637977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.803 [2024-11-18 08:09:57.638003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.803 qpair failed and we were unable to recover it.
00:36:04.803 [2024-11-18 08:09:57.638083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.803 [2024-11-18 08:09:57.638108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.803 qpair failed and we were unable to recover it.
00:36:04.803 [2024-11-18 08:09:57.638226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.803 [2024-11-18 08:09:57.638256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.803 qpair failed and we were unable to recover it.
00:36:04.803 [2024-11-18 08:09:57.638373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.803 [2024-11-18 08:09:57.638402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.803 qpair failed and we were unable to recover it.
00:36:04.803 [2024-11-18 08:09:57.638484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.803 [2024-11-18 08:09:57.638522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.803 qpair failed and we were unable to recover it.
00:36:04.803 [2024-11-18 08:09:57.638621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.803 [2024-11-18 08:09:57.638648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.803 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.638734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.638759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.638872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.638899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.638986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.639012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.639102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.639130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.639216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.639243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.639330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.639358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.639472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.639509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.639623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.639649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.639727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.639753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.639841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.639868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.639979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.640095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.640207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.640332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.640485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.640601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.640712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.640820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.640924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.640950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.641034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.641061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.641145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.641172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.641296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.641322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.641398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.641425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.641519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.641546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.641664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.641691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.641778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.641804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.641885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.641913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.642932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.642961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.643055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.643084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.804 qpair failed and we were unable to recover it.
00:36:04.804 [2024-11-18 08:09:57.643164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.804 [2024-11-18 08:09:57.643195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.643271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.643297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.643416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.643442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.643525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.643551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.643640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.643668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.643753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.643779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.643864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.643891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.643981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.644084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.644200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.644307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.644419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.644553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.644689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.644807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.644949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.644975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.645082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.645203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.645320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.645428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.645576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.645713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.645817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.645917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.645997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.646118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.646233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.646349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.646455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.646571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.646674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.646843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.646948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.646974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.647067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.647093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.647175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.647200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.647282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.647307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.647386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.647412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.647533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.647562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.647653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.647681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.647792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.647821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.647934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.647965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.805 qpair failed and we were unable to recover it.
00:36:04.805 [2024-11-18 08:09:57.648053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.805 [2024-11-18 08:09:57.648080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.648162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.648189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.648297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.648327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.648415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.648441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.648518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.648545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.648629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.648655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.648735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.648762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.648879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.648906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.648986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.649017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.649096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.649135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.649215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.649242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.649333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.649360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.649469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.649500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.649591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.649618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.649728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.649754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.649863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.649889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.649975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.650089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.650212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.650326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.650445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.650576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.650688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.650807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.650950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.650978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.651067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.651100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.651186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.651214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.651294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.651323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.651426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.806 [2024-11-18 08:09:57.651453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.806 qpair failed and we were unable to recover it.
00:36:04.806 [2024-11-18 08:09:57.651559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.651588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.806 qpair failed and we were unable to recover it. 00:36:04.806 [2024-11-18 08:09:57.651672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.651698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.806 qpair failed and we were unable to recover it. 00:36:04.806 [2024-11-18 08:09:57.651777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.651804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.806 qpair failed and we were unable to recover it. 00:36:04.806 [2024-11-18 08:09:57.651886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.651912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.806 qpair failed and we were unable to recover it. 00:36:04.806 [2024-11-18 08:09:57.652034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.652061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.806 qpair failed and we were unable to recover it. 
00:36:04.806 [2024-11-18 08:09:57.652142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.652168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.806 qpair failed and we were unable to recover it. 00:36:04.806 [2024-11-18 08:09:57.652264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.652289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.806 qpair failed and we were unable to recover it. 00:36:04.806 [2024-11-18 08:09:57.652370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.652395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.806 qpair failed and we were unable to recover it. 00:36:04.806 [2024-11-18 08:09:57.652480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.806 [2024-11-18 08:09:57.652519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.652633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.652660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 
00:36:04.807 [2024-11-18 08:09:57.652739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.652769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.652856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.652881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.652957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.652983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.653065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.653091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.653168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.653193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 
00:36:04.807 [2024-11-18 08:09:57.653287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.653317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.653407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.653436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.653526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.653553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.653638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.653664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.653745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.653772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 
00:36:04.807 [2024-11-18 08:09:57.653891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.653918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.654003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.654032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.654122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.654149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.654238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.654265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.654350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.654378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 
00:36:04.807 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.807 [2024-11-18 08:09:57.654508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.654543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.654632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.654658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:04.807 [2024-11-18 08:09:57.654747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.654775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.654856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.807 [2024-11-18 08:09:57.654885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 
00:36:04.807 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.807 [2024-11-18 08:09:57.654982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.655106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.655212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.655325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.655444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 
00:36:04.807 [2024-11-18 08:09:57.655569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.655687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.655794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.655904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.655930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.656041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 
00:36:04.807 [2024-11-18 08:09:57.656156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.656275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.656382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.656500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.656609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 
00:36:04.807 [2024-11-18 08:09:57.656713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.656837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.656955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.656982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.807 qpair failed and we were unable to recover it. 00:36:04.807 [2024-11-18 08:09:57.657056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.807 [2024-11-18 08:09:57.657082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.657193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.657224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.657323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.657350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.657449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.657476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.657563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.657590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.657671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.657696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.657781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.657817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.657933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.657959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.658039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.658065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.658152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.658178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.658266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.658294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.658378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.658407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.658503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.658536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.658633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.658660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.658780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.658806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.658896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.658928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.659007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.659033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.659123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.659150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.659229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.659256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.659372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.659398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.659493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.659520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.659606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.659632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.659715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.659741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.659883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.659909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.659989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.660103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.660218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.660349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.660471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.660585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.660692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.660795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.660903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.660934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.661046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.661084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.661173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.661200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.661293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.661331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.661433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.661461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.661560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.661588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.661673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.661699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.661778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.661805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.661887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.661912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 00:36:04.808 [2024-11-18 08:09:57.662016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.662048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.808 qpair failed and we were unable to recover it. 
00:36:04.808 [2024-11-18 08:09:57.662139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.808 [2024-11-18 08:09:57.662165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.662245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.662272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.662382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.662409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.662588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.662615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.662704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.662730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 
00:36:04.809 [2024-11-18 08:09:57.662828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.662861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.662942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.662969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.663057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.663084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.663161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.663186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.663270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.663295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 
00:36:04.809 [2024-11-18 08:09:57.663438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.663464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.663562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.663592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.663685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.663712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.663804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.663835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.663917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.663944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 
00:36:04.809 [2024-11-18 08:09:57.664039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.664065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.664188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.664217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.664299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.664326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.664412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.664438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.664544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.664571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 
00:36:04.809 [2024-11-18 08:09:57.664660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.664686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.664761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.664788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.664915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.664944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.665028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.665055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.665135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.665162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 
00:36:04.809 [2024-11-18 08:09:57.665278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.665304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.665397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.665426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.665522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.665549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.665627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.665659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.665749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.665775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 
00:36:04.809 [2024-11-18 08:09:57.665859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.665894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.665978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.666108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.666239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.666352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 
00:36:04.809 [2024-11-18 08:09:57.666462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.666588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.666699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.666829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.666934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.666966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 
00:36:04.809 [2024-11-18 08:09:57.667053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.667087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.667177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.667206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.667289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.667315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.667400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.667427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.809 qpair failed and we were unable to recover it. 00:36:04.809 [2024-11-18 08:09:57.667515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.809 [2024-11-18 08:09:57.667542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 
00:36:04.810 [2024-11-18 08:09:57.667626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.667652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.667768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.667795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.667881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.667908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.668033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.668059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.668146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.668172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 
00:36:04.810 [2024-11-18 08:09:57.668254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.668296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.668387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.668413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.668516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.668543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.668633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.668666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.668751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.668777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 
00:36:04.810 [2024-11-18 08:09:57.668890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.668918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.669010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.669036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.669125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.669152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.669260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.669292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.669374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.669401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 
00:36:04.810 [2024-11-18 08:09:57.669525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.669552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.669654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.669680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.669812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.669839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.669913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.669940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.670051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.670077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 
00:36:04.810 [2024-11-18 08:09:57.670159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.670187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.670294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.670341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.670434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.670463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.670556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.670583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.670663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.670695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 
00:36:04.810 [2024-11-18 08:09:57.670819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.670845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.670963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.670993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.671087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.671114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.671222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.671249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.671357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.671385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 
00:36:04.810 [2024-11-18 08:09:57.671477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.671518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.671604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.671631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.671727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.671753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.671842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.671869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.671956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.671993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 
00:36:04.810 [2024-11-18 08:09:57.672082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.672108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.672223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.672251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.672330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.672356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.672438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.672467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.810 qpair failed and we were unable to recover it. 00:36:04.810 [2024-11-18 08:09:57.672571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.810 [2024-11-18 08:09:57.672598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.811 qpair failed and we were unable to recover it. 
00:36:04.811 [2024-11-18 08:09:57.672681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.811 [2024-11-18 08:09:57.672708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.811 qpair failed and we were unable to recover it. 00:36:04.811 [2024-11-18 08:09:57.672800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.811 [2024-11-18 08:09:57.672827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.811 qpair failed and we were unable to recover it. 00:36:04.811 [2024-11-18 08:09:57.672936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.811 [2024-11-18 08:09:57.672962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.811 qpair failed and we were unable to recover it. 00:36:04.811 [2024-11-18 08:09:57.673048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.811 [2024-11-18 08:09:57.673074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.811 qpair failed and we were unable to recover it. 00:36:04.811 [2024-11-18 08:09:57.673152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.811 [2024-11-18 08:09:57.673178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.811 qpair failed and we were unable to recover it. 
00:36:04.811 [2024-11-18 08:09:57.673272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.673307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.673405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.673444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.673545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.673573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.673689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.673716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.673833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.673870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.673971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.673997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.674104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.674130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.674211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.674237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.674355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.674382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.674498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.674525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.674616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.674643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.674721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.674747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.674832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.674858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.674938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.674964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.675048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.675075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.675177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.675217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.675312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.675340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.675417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.675443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.675553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.675587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.675673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.675699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.675788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.675819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.675897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.675924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.676032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.676196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.676311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.676422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.676542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.676652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.676763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.676874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.676987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.677013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.677118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.677145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.677231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.677260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.677377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.677404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.677485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.677535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.677631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.677658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.677738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.677764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.677857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.811 [2024-11-18 08:09:57.677884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.811 qpair failed and we were unable to recover it.
00:36:04.811 [2024-11-18 08:09:57.677962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.677988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.678113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.678140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.678232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.678259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.678337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.678365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.678474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.678529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.678622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.678650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.678733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.678759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.678845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.678876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.678955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.678980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.679069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.679097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.679214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.679241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.679317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.679345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.679429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.679455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.679553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.679581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.679667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.679693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.679805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.679833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.679940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.679966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.680078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.680222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.680345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.680456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.680579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.680694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.680796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.680909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.680999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.681109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.681220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.681320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.681440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.681570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.681679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.681809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.681918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.681944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.682088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.682214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.682325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.682433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.682550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.682684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.682785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.682888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.682983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.683009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.812 [2024-11-18 08:09:57.683093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.812 [2024-11-18 08:09:57.683119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.812 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.683238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.683265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.683363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.683392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.683474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.683517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.683600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.683628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.683728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.683756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.683846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.683872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.683956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.683983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.684075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.684103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.684197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.684236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.684442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.684468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.684574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.684603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.684699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.684726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.684816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.813 [2024-11-18 08:09:57.684842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.813 qpair failed and we were unable to recover it.
00:36:04.813 [2024-11-18 08:09:57.684923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.684949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.685028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.685058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.685136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.685162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.685281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.685309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.685397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.685424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 
00:36:04.813 [2024-11-18 08:09:57.685533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.685562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.685656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.685684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.685777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.685803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.685889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.685915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.685999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 
00:36:04.813 [2024-11-18 08:09:57.686111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.686228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.686341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.686453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.686581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 
00:36:04.813 [2024-11-18 08:09:57.686704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.686819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.686925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.686953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.687044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.687070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.687157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.687182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 
00:36:04.813 [2024-11-18 08:09:57.687276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.687305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.687389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.687417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.687518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.687546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.687640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.687666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.687751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.687777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 
00:36:04.813 [2024-11-18 08:09:57.687857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.687886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.687978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.688006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.688095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.688121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.688199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.688226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.688309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.688337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 
00:36:04.813 [2024-11-18 08:09:57.688421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.688449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.688548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.688575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.688656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.688682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.688754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.813 [2024-11-18 08:09:57.688781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.813 qpair failed and we were unable to recover it. 00:36:04.813 [2024-11-18 08:09:57.688872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.688899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.688978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.689005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.689096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.689124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.689211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.689237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.689317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.689343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.689432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.689459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.689550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.689579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.689671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.689715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.689819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.689846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.689988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.690095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.690205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.690314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.690442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.690575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.690697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.690819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.690936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.690962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.691044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.691153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.691264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.691387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.691508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.691611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.691718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.691845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.691958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.691985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.692081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.692199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.692307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.692407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.692535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.692639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.692742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.692863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.692971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.692996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.693079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.693105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.693198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.693225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.693317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.693343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.693424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.693449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.693534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.693560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 
00:36:04.814 [2024-11-18 08:09:57.693644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.693669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.693759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.693786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.814 qpair failed and we were unable to recover it. 00:36:04.814 [2024-11-18 08:09:57.693860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.814 [2024-11-18 08:09:57.693885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.693986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.694089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.694201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.694319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.694437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.694565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.694678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.694808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.694912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.694937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.695014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.695040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.695129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.695154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.695256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.695293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.695504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.695532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.695626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.695652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 Malloc0 00:36:04.815 [2024-11-18 08:09:57.695732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.695757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.695949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.695973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.696056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.815 [2024-11-18 08:09:57.696081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.696166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.696190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:04.815 [2024-11-18 08:09:57.696279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.696303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.815 [2024-11-18 08:09:57.696398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.696428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.815 [2024-11-18 08:09:57.696533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.696561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.696656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.696685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.696777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.696804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.696895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.696922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.697010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.697129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.697245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.697344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.697465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.697596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.697703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.697823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.697944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.697972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.698061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.698087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.698168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.698194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.698285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.698310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.698392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.698417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.698507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.698532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.698619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.698645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.698756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.698782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.698875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.698900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.698989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.699018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.699111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.699140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.699220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.699246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.699326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.699353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 00:36:04.815 [2024-11-18 08:09:57.699432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.699458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.815 qpair failed and we were unable to recover it. 
00:36:04.815 [2024-11-18 08:09:57.699536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.815 [2024-11-18 08:09:57.699551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.815 [2024-11-18 08:09:57.699578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.699671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.699696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.699785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.699810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.699887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.699913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.699998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.700117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.700236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.700347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.700462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.700585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.700697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.700822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.700932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.700957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.701157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.701184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.701266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.701295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.701407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.701433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.701538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.701565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.701655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.701683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.701764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.701800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.701913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.701940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.702026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.702052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.702135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.702161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.702356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.702383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.702552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.702580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.702661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.702687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.702776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.702804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.702879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.702905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.702978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.703003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.703118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.703143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.703247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.703286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.703375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.703404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.703517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.703545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.703645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.703672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.703753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.703786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.703870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.703901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.703987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.704109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.704232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.704352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.704472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.704594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.704705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.704824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.704941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.704968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.705050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.705078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.705161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.705187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.705287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.705326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.705414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.705443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.705535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.705565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.816 [2024-11-18 08:09:57.705658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.705689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 
00:36:04.816 [2024-11-18 08:09:57.705775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.816 [2024-11-18 08:09:57.705801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.816 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.705882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.705910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.705996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.706118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.706234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.706354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.706468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.706591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.706706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.706816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.706932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.706959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.707058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.707084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.707167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.707194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.707295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.707335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.707433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.707461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.707568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.707595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.707678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.707703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.707800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.707825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.707907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.707934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.708037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.817 [2024-11-18 08:09:57.708063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.708147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.708176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.708263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:04.817 [2024-11-18 08:09:57.708293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.708377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.708402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.817 [2024-11-18 08:09:57.708506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.708532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.708614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.708639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.817 [2024-11-18 08:09:57.708735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.708760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.710006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.710041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.710133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.710162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.710257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.710285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.710364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.710390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.710522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.710561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.710682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.710709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.710809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.710835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.710926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.710951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.711038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.711064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.711142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.711167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.711246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.711271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.711346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.711372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.711472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.711519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.711608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.711635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.711717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.711746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.711839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.711867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.711987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.712017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.712109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.712135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.712223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.712249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.712342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.712369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.712451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.712477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.712598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.712624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 00:36:04.817 [2024-11-18 08:09:57.712715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.712741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.817 qpair failed and we were unable to recover it. 
00:36:04.817 [2024-11-18 08:09:57.712836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.817 [2024-11-18 08:09:57.712861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.712966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.713001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.713097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.713124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.713237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.713263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.713352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.713378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 
00:36:04.818 [2024-11-18 08:09:57.713469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.713509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.713598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.713624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.713724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.713750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.713862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.713887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.713987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 
00:36:04.818 [2024-11-18 08:09:57.714135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.714245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.714345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.714453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.714620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 
00:36:04.818 [2024-11-18 08:09:57.714731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.714844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.714945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.714971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.715051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.715079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.715188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.715214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 
00:36:04.818 [2024-11-18 08:09:57.715298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.715326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.715425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.715452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.715569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.715596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.715684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.715709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.715794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.715819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.818 qpair failed and we were unable to recover it. 
00:36:04.818 [2024-11-18 08:09:57.715908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.715932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.716021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:04.818 [2024-11-18 08:09:57.716048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 [2024-11-18 08:09:57.716127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.818 [2024-11-18 08:09:57.716152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.818 qpair failed and we were unable to recover it. 00:36:04.818 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.818 [2024-11-18 08:09:57.716259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.716291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 
00:36:04.819 [2024-11-18 08:09:57.716383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.716410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.819 [2024-11-18 08:09:57.716506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.716535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.716661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.716688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.716768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.716795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.716890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.716915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 
00:36:04.819 [2024-11-18 08:09:57.717008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.717125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.717230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.717340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.717454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 
00:36:04.819 [2024-11-18 08:09:57.717575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.717692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.717827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.717935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.717961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.718046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.718071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 
00:36:04.819 [2024-11-18 08:09:57.718147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.718174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.718263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.718292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.718412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.718440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.718549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.718577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.718673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.718700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 
00:36:04.819 [2024-11-18 08:09:57.718783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.718814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.718899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.718925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.719010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.719037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.719130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.719160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.719245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.719281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 
00:36:04.819 [2024-11-18 08:09:57.719379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.719412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.719512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.719539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.719643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.719671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.719764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.719790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 00:36:04.819 [2024-11-18 08:09:57.719879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.819 [2024-11-18 08:09:57.719905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420 00:36:04.819 qpair failed and we were unable to recover it. 
00:36:04.819 [2024-11-18 08:09:57.719988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.819 [2024-11-18 08:09:57.720016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.819 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.720100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.720127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.720247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.720273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.720361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.720389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.720485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.720524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.720611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.720637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.720738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.720765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.720872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.720899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.720989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.721101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.721249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.721375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.721513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.721626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.721741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.721846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.721952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.721977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.722082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.722111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.722233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.722260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.722346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.722373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.722452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.722478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.722566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.722596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.722689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.722715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.722803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.722830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.722913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.722938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.723024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.723052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.723134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.723163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.723264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.723290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.723378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.723405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.723512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.723538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.723630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.723656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.723746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.723770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.723874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.723899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:04.820 [2024-11-18 08:09:57.723983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.820 [2024-11-18 08:09:57.724007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.820 qpair failed and we were unable to recover it.
00:36:04.820 [2024-11-18 08:09:57.724089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:04.821 [2024-11-18 08:09:57.724203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.724302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.821 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.724419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:04.821 [2024-11-18 08:09:57.724539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.724647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.724752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.724862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.724972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.724997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.725072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.725099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.725210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.725236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.725322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.725347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.725428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.725458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f690 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.725568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.725607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b00000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.725698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.725727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.725831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.725858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.725941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.725967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.726050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.726086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.726166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.726193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.726385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.726411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.726511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.726538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.726631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.726658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.726748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.726774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.726857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.726883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af4000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.726968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.726997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.727083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.727111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.727230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.821 [2024-11-18 08:09:57.727257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.821 qpair failed and we were unable to recover it.
00:36:04.821 [2024-11-18 08:09:57.727338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.822 [2024-11-18 08:09:57.727371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.727460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.822 [2024-11-18 08:09:57.727486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.727582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.822 [2024-11-18 08:09:57.727610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7af8000b90 with addr=10.0.0.2, port=4420
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.728104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:04.822 [2024-11-18 08:09:57.730457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.822 [2024-11-18 08:09:57.730647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.822 [2024-11-18 08:09:57.730678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.822 [2024-11-18 08:09:57.730714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.822 [2024-11-18 08:09:57.730732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.822 [2024-11-18 08:09:57.730791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:04.822 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:04.822 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:04.822 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:04.822 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:04.822 08:09:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 896242
00:36:04.822 [2024-11-18 08:09:57.740231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.822 [2024-11-18 08:09:57.740335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.822 [2024-11-18 08:09:57.740364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.822 [2024-11-18 08:09:57.740379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.822 [2024-11-18 08:09:57.740391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.822 [2024-11-18 08:09:57.740424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.750191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.822 [2024-11-18 08:09:57.750279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.822 [2024-11-18 08:09:57.750306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.822 [2024-11-18 08:09:57.750321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.822 [2024-11-18 08:09:57.750333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.822 [2024-11-18 08:09:57.750363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.760289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.822 [2024-11-18 08:09:57.760431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.822 [2024-11-18 08:09:57.760458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.822 [2024-11-18 08:09:57.760472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.822 [2024-11-18 08:09:57.760484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.822 [2024-11-18 08:09:57.760528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.770172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.822 [2024-11-18 08:09:57.770257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.822 [2024-11-18 08:09:57.770283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.822 [2024-11-18 08:09:57.770297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.822 [2024-11-18 08:09:57.770309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.822 [2024-11-18 08:09:57.770353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.780222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.822 [2024-11-18 08:09:57.780309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.822 [2024-11-18 08:09:57.780335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.822 [2024-11-18 08:09:57.780349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.822 [2024-11-18 08:09:57.780361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.822 [2024-11-18 08:09:57.780391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.790197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.822 [2024-11-18 08:09:57.790281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.822 [2024-11-18 08:09:57.790313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.822 [2024-11-18 08:09:57.790328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.822 [2024-11-18 08:09:57.790340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.822 [2024-11-18 08:09:57.790370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.822 [2024-11-18 08:09:57.800245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.822 [2024-11-18 08:09:57.800338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.822 [2024-11-18 08:09:57.800364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.822 [2024-11-18 08:09:57.800378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.822 [2024-11-18 08:09:57.800390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.822 [2024-11-18 08:09:57.800420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.822 qpair failed and we were unable to recover it.
00:36:04.823 [2024-11-18 08:09:57.810300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.823 [2024-11-18 08:09:57.810390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.823 [2024-11-18 08:09:57.810414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.823 [2024-11-18 08:09:57.810428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.823 [2024-11-18 08:09:57.810439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.823 [2024-11-18 08:09:57.810469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.823 qpair failed and we were unable to recover it.
00:36:04.823 [2024-11-18 08:09:57.820305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.823 [2024-11-18 08:09:57.820405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.823 [2024-11-18 08:09:57.820431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.823 [2024-11-18 08:09:57.820445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.823 [2024-11-18 08:09:57.820457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.823 [2024-11-18 08:09:57.820487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.823 qpair failed and we were unable to recover it.
00:36:04.823 [2024-11-18 08:09:57.830452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.823 [2024-11-18 08:09:57.830550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.823 [2024-11-18 08:09:57.830583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.823 [2024-11-18 08:09:57.830608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.823 [2024-11-18 08:09:57.830629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:04.823 [2024-11-18 08:09:57.830676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:04.823 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.840447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.840560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.840588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.840602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.840614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.840645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.850403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.850530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.850561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.850576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.850588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.850619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.860414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.860510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.860536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.860549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.860561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.860591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.870423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.870513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.870537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.870551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.870563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.870593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.880470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.880575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.880604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.880619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.880631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.880660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.890505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.890591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.890617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.890631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.890642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.890672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.900513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.900602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.900627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.900641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.900652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.900682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.910548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.910633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.910657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.910671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.910683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.910713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.920600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.920702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.920734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.920749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.920760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.920791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.930646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.930731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.930755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.084 [2024-11-18 08:09:57.930768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.084 [2024-11-18 08:09:57.930780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.084 [2024-11-18 08:09:57.930810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.084 qpair failed and we were unable to recover it.
00:36:05.084 [2024-11-18 08:09:57.940683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.084 [2024-11-18 08:09:57.940773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.084 [2024-11-18 08:09:57.940803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:57.940818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:57.940830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:57.940873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:57.950677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:57.950802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:57.950828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:57.950842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:57.950854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:57.950884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:57.960694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:57.960783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:57.960808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:57.960827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:57.960839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:57.960869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:57.970721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:57.970817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:57.970844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:57.970858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:57.970869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:57.970899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:57.980757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:57.980871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:57.980897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:57.980911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:57.980923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:57.980953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:57.990855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:57.990943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:57.990969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:57.990983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:57.990994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:57.991024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:58.000859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:58.000963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:58.000991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:58.001005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:58.001017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:58.001056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:58.010874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:58.010989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:58.011016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:58.011030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:58.011042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:58.011072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:58.020879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:58.021000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:58.021025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:58.021039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:58.021052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:58.021082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:58.030888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:58.030977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:58.031006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:58.031019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:58.031031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:58.031061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:58.041000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:58.041101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:58.041128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:58.041142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:58.041154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:58.041185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:58.050981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:58.051077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:58.051107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:58.051123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:58.051135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:58.051166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:58.060996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:58.061083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.085 [2024-11-18 08:09:58.061111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.085 [2024-11-18 08:09:58.061127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.085 [2024-11-18 08:09:58.061138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.085 [2024-11-18 08:09:58.061169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.085 qpair failed and we were unable to recover it.
00:36:05.085 [2024-11-18 08:09:58.071026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.085 [2024-11-18 08:09:58.071150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.086 [2024-11-18 08:09:58.071177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.086 [2024-11-18 08:09:58.071191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.086 [2024-11-18 08:09:58.071203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.086 [2024-11-18 08:09:58.071233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.086 qpair failed and we were unable to recover it.
00:36:05.086 [2024-11-18 08:09:58.081045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.086 [2024-11-18 08:09:58.081140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.086 [2024-11-18 08:09:58.081172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.086 [2024-11-18 08:09:58.081197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.086 [2024-11-18 08:09:58.081213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.086 [2024-11-18 08:09:58.081245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.086 qpair failed and we were unable to recover it.
00:36:05.086 [2024-11-18 08:09:58.091101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.086 [2024-11-18 08:09:58.091204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.086 [2024-11-18 08:09:58.091233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.086 [2024-11-18 08:09:58.091254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.086 [2024-11-18 08:09:58.091267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.086 [2024-11-18 08:09:58.091299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.086 qpair failed and we were unable to recover it.
00:36:05.086 [2024-11-18 08:09:58.101159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.086 [2024-11-18 08:09:58.101280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.086 [2024-11-18 08:09:58.101306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.086 [2024-11-18 08:09:58.101320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.086 [2024-11-18 08:09:58.101332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.086 [2024-11-18 08:09:58.101363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.086 qpair failed and we were unable to recover it.
00:36:05.086 [2024-11-18 08:09:58.111101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.086 [2024-11-18 08:09:58.111182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.086 [2024-11-18 08:09:58.111207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.086 [2024-11-18 08:09:58.111220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.086 [2024-11-18 08:09:58.111232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.086 [2024-11-18 08:09:58.111262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.086 qpair failed and we were unable to recover it.
00:36:05.086 [2024-11-18 08:09:58.121151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.086 [2024-11-18 08:09:58.121246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.086 [2024-11-18 08:09:58.121271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.086 [2024-11-18 08:09:58.121284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.086 [2024-11-18 08:09:58.121296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.086 [2024-11-18 08:09:58.121325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-11-18 08:09:58.131189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.086 [2024-11-18 08:09:58.131277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.086 [2024-11-18 08:09:58.131305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.086 [2024-11-18 08:09:58.131320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.086 [2024-11-18 08:09:58.131331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.086 [2024-11-18 08:09:58.131367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-11-18 08:09:58.141277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.086 [2024-11-18 08:09:58.141362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.086 [2024-11-18 08:09:58.141388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.086 [2024-11-18 08:09:58.141406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.086 [2024-11-18 08:09:58.141419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.086 [2024-11-18 08:09:58.141448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-11-18 08:09:58.151237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.086 [2024-11-18 08:09:58.151325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.086 [2024-11-18 08:09:58.151350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.086 [2024-11-18 08:09:58.151363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.086 [2024-11-18 08:09:58.151375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.086 [2024-11-18 08:09:58.151405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-11-18 08:09:58.161294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.086 [2024-11-18 08:09:58.161399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.086 [2024-11-18 08:09:58.161425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.086 [2024-11-18 08:09:58.161439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.086 [2024-11-18 08:09:58.161451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.086 [2024-11-18 08:09:58.161481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.347 [2024-11-18 08:09:58.171305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.347 [2024-11-18 08:09:58.171410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.347 [2024-11-18 08:09:58.171437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.347 [2024-11-18 08:09:58.171451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.347 [2024-11-18 08:09:58.171462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.347 [2024-11-18 08:09:58.171509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.347 qpair failed and we were unable to recover it. 
00:36:05.347 [2024-11-18 08:09:58.181442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.347 [2024-11-18 08:09:58.181591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.347 [2024-11-18 08:09:58.181618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.347 [2024-11-18 08:09:58.181632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.181643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.181674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.191352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.191439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.191465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.191478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.191497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.191530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.201414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.201519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.201546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.201560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.201572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.201602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.211517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.211641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.211666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.211680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.211691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.211735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.221537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.221618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.221648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.221662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.221674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.221718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.231477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.231590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.231616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.231629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.231642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.231672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.241507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.241612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.241638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.241652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.241663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.241694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.251571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.251702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.251732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.251748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.251760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.251804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.261575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.261696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.261721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.261734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.261751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.261782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.271681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.271761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.271787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.271801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.271813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.271844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.281656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.281744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.281768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.281781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.281793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.281823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.291675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.291763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.291787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.291800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.291812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.291841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.301678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.301775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.348 [2024-11-18 08:09:58.301801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.348 [2024-11-18 08:09:58.301815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.348 [2024-11-18 08:09:58.301827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.348 [2024-11-18 08:09:58.301856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.348 qpair failed and we were unable to recover it. 
00:36:05.348 [2024-11-18 08:09:58.311707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.348 [2024-11-18 08:09:58.311801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.311827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.311840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.311852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.311881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.321744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.321848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.321874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.321888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.321899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.321929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.331757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.331848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.331879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.331904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.331927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.331962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.341801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.341928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.341955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.341969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.341981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.342012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.351800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.351881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.351911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.351926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.351938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.351968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.361864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.361954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.361982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.361996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.362008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.362038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.371874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.371954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.371978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.371992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.372004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.372033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.381901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.381982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.382006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.382020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.382032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.382061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.391944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.392026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.392050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.392063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.392082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.392113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.401996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.349 [2024-11-18 08:09:58.402087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.349 [2024-11-18 08:09:58.402113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.349 [2024-11-18 08:09:58.402126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.349 [2024-11-18 08:09:58.402138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.349 [2024-11-18 08:09:58.402168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.349 qpair failed and we were unable to recover it. 
00:36:05.349 [2024-11-18 08:09:58.412069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.349 [2024-11-18 08:09:58.412154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.349 [2024-11-18 08:09:58.412178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.349 [2024-11-18 08:09:58.412192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.349 [2024-11-18 08:09:58.412204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.349 [2024-11-18 08:09:58.412247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.349 qpair failed and we were unable to recover it.
00:36:05.349 [2024-11-18 08:09:58.422044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.349 [2024-11-18 08:09:58.422130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.349 [2024-11-18 08:09:58.422155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.349 [2024-11-18 08:09:58.422169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.349 [2024-11-18 08:09:58.422180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.349 [2024-11-18 08:09:58.422210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.349 qpair failed and we were unable to recover it.
00:36:05.349 [2024-11-18 08:09:58.432050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.349 [2024-11-18 08:09:58.432134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.349 [2024-11-18 08:09:58.432160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.349 [2024-11-18 08:09:58.432174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.349 [2024-11-18 08:09:58.432186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.349 [2024-11-18 08:09:58.432222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.349 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.442107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.442244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.442270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.442284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.442296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.442326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.452138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.452224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.452248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.452262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.452274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.452304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.462241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.462346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.462372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.462386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.462397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.462426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.472181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.472263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.472289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.472303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.472314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.472344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.482309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.482408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.482440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.482454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.482466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.482504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.492244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.492361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.492388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.492402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.492414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.492443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.502285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.502403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.502429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.502443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.502455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.502485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.512381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.512469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.512502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.512519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.512531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.512560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.522467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.522582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.522607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.522626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.522638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.522668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.532447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.532553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.532579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.532593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.532605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.532649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.542441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.542551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.611 [2024-11-18 08:09:58.542576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.611 [2024-11-18 08:09:58.542590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.611 [2024-11-18 08:09:58.542601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.611 [2024-11-18 08:09:58.542631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.611 qpair failed and we were unable to recover it.
00:36:05.611 [2024-11-18 08:09:58.552461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.611 [2024-11-18 08:09:58.552555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.552579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.552593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.552604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.552634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.562458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.562569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.562599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.562615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.562627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.562663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.572483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.572576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.572602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.572616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.572627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.572657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.582503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.582615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.582647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.582667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.582680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.582711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.592536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.592626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.592655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.592670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.592682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.592713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.602648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.602742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.602769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.602783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.602795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.602825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.612582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.612673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.612698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.612711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.612723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.612753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.622604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.622686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.622710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.622723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.622735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.622764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.632645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.632729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.632753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.632767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.632778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.632808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.642681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.642774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.642800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.642813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.642825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.642855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.652804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.652904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.652930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.652950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.652962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.652992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.662765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.662890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.662916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.662930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.612 [2024-11-18 08:09:58.662942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.612 [2024-11-18 08:09:58.662971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.612 qpair failed and we were unable to recover it.
00:36:05.612 [2024-11-18 08:09:58.672752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.612 [2024-11-18 08:09:58.672838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.612 [2024-11-18 08:09:58.672862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.612 [2024-11-18 08:09:58.672876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.613 [2024-11-18 08:09:58.672887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.613 [2024-11-18 08:09:58.672917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.613 qpair failed and we were unable to recover it.
00:36:05.613 [2024-11-18 08:09:58.682785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.613 [2024-11-18 08:09:58.682878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.613 [2024-11-18 08:09:58.682903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.613 [2024-11-18 08:09:58.682917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.613 [2024-11-18 08:09:58.682928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.613 [2024-11-18 08:09:58.682958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.613 qpair failed and we were unable to recover it.
00:36:05.613 [2024-11-18 08:09:58.692814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.613 [2024-11-18 08:09:58.692909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.613 [2024-11-18 08:09:58.692935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.613 [2024-11-18 08:09:58.692949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.613 [2024-11-18 08:09:58.692961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.613 [2024-11-18 08:09:58.692997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.613 qpair failed and we were unable to recover it.
00:36:05.874 [2024-11-18 08:09:58.702879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.874 [2024-11-18 08:09:58.703008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.874 [2024-11-18 08:09:58.703034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.874 [2024-11-18 08:09:58.703049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.874 [2024-11-18 08:09:58.703061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.874 [2024-11-18 08:09:58.703092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-11-18 08:09:58.712915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.874 [2024-11-18 08:09:58.712995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.874 [2024-11-18 08:09:58.713022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.874 [2024-11-18 08:09:58.713036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.874 [2024-11-18 08:09:58.713047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.874 [2024-11-18 08:09:58.713078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.875 [2024-11-18 08:09:58.722936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.875 [2024-11-18 08:09:58.723042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.875 [2024-11-18 08:09:58.723067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.875 [2024-11-18 08:09:58.723081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.875 [2024-11-18 08:09:58.723093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.875 [2024-11-18 08:09:58.723123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.875 qpair failed and we were unable to recover it.
00:36:05.875 [2024-11-18 08:09:58.732983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.875 [2024-11-18 08:09:58.733089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.875 [2024-11-18 08:09:58.733114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.875 [2024-11-18 08:09:58.733128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.875 [2024-11-18 08:09:58.733140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.875 [2024-11-18 08:09:58.733169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.875 qpair failed and we were unable to recover it.
00:36:05.875 [2024-11-18 08:09:58.743086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.875 [2024-11-18 08:09:58.743171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.875 [2024-11-18 08:09:58.743197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.875 [2024-11-18 08:09:58.743211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.875 [2024-11-18 08:09:58.743223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.875 [2024-11-18 08:09:58.743275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.875 qpair failed and we were unable to recover it.
00:36:05.875 [2024-11-18 08:09:58.753049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.875 [2024-11-18 08:09:58.753156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.875 [2024-11-18 08:09:58.753182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.875 [2024-11-18 08:09:58.753195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.875 [2024-11-18 08:09:58.753207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:05.875 [2024-11-18 08:09:58.753237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.875 qpair failed and we were unable to recover it.
00:36:05.875 [2024-11-18 08:09:58.763042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.763140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.763166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.763179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.763191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.763220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.773045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.773160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.773186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.773200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.773212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.773241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.783052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.783182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.783213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.783229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.783241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.783271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.793149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.793247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.793272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.793286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.793298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.793328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.803131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.803223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.803249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.803263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.803276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.803306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.813188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.813280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.813304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.813318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.813329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.813359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.823206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.823289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.823313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.823326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.823344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.823374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.833210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.833349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.833378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.833392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.833404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.833435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.843349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.843468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.843505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.843524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.843536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.843567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.853305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.853396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.853426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.853441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.853454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.853484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.863340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.863422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.863448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.863462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.863473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.863524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.873421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.873514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.873547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.873562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.875 [2024-11-18 08:09:58.873573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.875 [2024-11-18 08:09:58.873603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-11-18 08:09:58.883366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.875 [2024-11-18 08:09:58.883460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.875 [2024-11-18 08:09:58.883485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.875 [2024-11-18 08:09:58.883509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.876 [2024-11-18 08:09:58.883522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.876 [2024-11-18 08:09:58.883552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-11-18 08:09:58.893424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.876 [2024-11-18 08:09:58.893548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.876 [2024-11-18 08:09:58.893574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.876 [2024-11-18 08:09:58.893588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.876 [2024-11-18 08:09:58.893600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.876 [2024-11-18 08:09:58.893630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-11-18 08:09:58.903457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.876 [2024-11-18 08:09:58.903591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.876 [2024-11-18 08:09:58.903618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.876 [2024-11-18 08:09:58.903632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.876 [2024-11-18 08:09:58.903644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.876 [2024-11-18 08:09:58.903674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-11-18 08:09:58.913559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.876 [2024-11-18 08:09:58.913686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.876 [2024-11-18 08:09:58.913717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.876 [2024-11-18 08:09:58.913732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.876 [2024-11-18 08:09:58.913744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.876 [2024-11-18 08:09:58.913774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-11-18 08:09:58.923618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.876 [2024-11-18 08:09:58.923734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.876 [2024-11-18 08:09:58.923760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.876 [2024-11-18 08:09:58.923774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.876 [2024-11-18 08:09:58.923785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.876 [2024-11-18 08:09:58.923829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-11-18 08:09:58.933570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.876 [2024-11-18 08:09:58.933664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.876 [2024-11-18 08:09:58.933690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.876 [2024-11-18 08:09:58.933704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.876 [2024-11-18 08:09:58.933716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.876 [2024-11-18 08:09:58.933746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-11-18 08:09:58.943564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.876 [2024-11-18 08:09:58.943654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.876 [2024-11-18 08:09:58.943678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.876 [2024-11-18 08:09:58.943691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.876 [2024-11-18 08:09:58.943703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.876 [2024-11-18 08:09:58.943733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-11-18 08:09:58.953598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.876 [2024-11-18 08:09:58.953687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.876 [2024-11-18 08:09:58.953722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.876 [2024-11-18 08:09:58.953740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.876 [2024-11-18 08:09:58.953758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:05.876 [2024-11-18 08:09:58.953789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:06.137 [2024-11-18 08:09:58.963708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.137 [2024-11-18 08:09:58.963803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.137 [2024-11-18 08:09:58.963830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.137 [2024-11-18 08:09:58.963844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.137 [2024-11-18 08:09:58.963856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.137 [2024-11-18 08:09:58.963886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.137 qpair failed and we were unable to recover it. 
00:36:06.137 [2024-11-18 08:09:58.973717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.137 [2024-11-18 08:09:58.973803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.137 [2024-11-18 08:09:58.973829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.137 [2024-11-18 08:09:58.973844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.137 [2024-11-18 08:09:58.973856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.137 [2024-11-18 08:09:58.973886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.137 qpair failed and we were unable to recover it. 
00:36:06.137 [2024-11-18 08:09:58.983686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.137 [2024-11-18 08:09:58.983815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.137 [2024-11-18 08:09:58.983841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.137 [2024-11-18 08:09:58.983855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.137 [2024-11-18 08:09:58.983867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.137 [2024-11-18 08:09:58.983896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.137 qpair failed and we were unable to recover it. 
00:36:06.137 [2024-11-18 08:09:58.993687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.137 [2024-11-18 08:09:58.993809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.137 [2024-11-18 08:09:58.993834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.137 [2024-11-18 08:09:58.993849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.137 [2024-11-18 08:09:58.993860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.137 [2024-11-18 08:09:58.993890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.137 qpair failed and we were unable to recover it. 
00:36:06.137 [2024-11-18 08:09:59.003767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.137 [2024-11-18 08:09:59.003857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.137 [2024-11-18 08:09:59.003883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.137 [2024-11-18 08:09:59.003897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.137 [2024-11-18 08:09:59.003909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.137 [2024-11-18 08:09:59.003939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.137 qpair failed and we were unable to recover it. 
00:36:06.137 [2024-11-18 08:09:59.013746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.137 [2024-11-18 08:09:59.013833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.137 [2024-11-18 08:09:59.013857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.137 [2024-11-18 08:09:59.013871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.137 [2024-11-18 08:09:59.013882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.137 [2024-11-18 08:09:59.013912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.137 qpair failed and we were unable to recover it. 
00:36:06.137 [2024-11-18 08:09:59.023865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.137 [2024-11-18 08:09:59.023947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.137 [2024-11-18 08:09:59.023972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.137 [2024-11-18 08:09:59.023986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.137 [2024-11-18 08:09:59.023997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.137 [2024-11-18 08:09:59.024027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.137 qpair failed and we were unable to recover it. 
00:36:06.137 [2024-11-18 08:09:59.033781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.137 [2024-11-18 08:09:59.033867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.137 [2024-11-18 08:09:59.033890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.137 [2024-11-18 08:09:59.033904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.137 [2024-11-18 08:09:59.033916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.137 [2024-11-18 08:09:59.033946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.137 qpair failed and we were unable to recover it.
00:36:06.137 [2024-11-18 08:09:59.043843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.137 [2024-11-18 08:09:59.043952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.137 [2024-11-18 08:09:59.043986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.137 [2024-11-18 08:09:59.044002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.137 [2024-11-18 08:09:59.044014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.137 [2024-11-18 08:09:59.044043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.137 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.053855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.053935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.053960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.053974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.053986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.054018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.063900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.063986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.064010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.064024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.064035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.064064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.073968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.074054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.074080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.074094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.074105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.074135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.083989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.084086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.084117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.084147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.084161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.084193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.093988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.094106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.094134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.094148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.094160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.094190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.104024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.104162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.104189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.104203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.104215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.104245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.114040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.114163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.114189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.114203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.114215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.114245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.124045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.124137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.124167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.124181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.124193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.124229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.134099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.134189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.134214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.134228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.134240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.134270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.144185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.144266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.144291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.144305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.144317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.144347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.154174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.154286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.154312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.154326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.154337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.154367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.164249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.164345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.164370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.164384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.164396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.164426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.138 qpair failed and we were unable to recover it.
00:36:06.138 [2024-11-18 08:09:59.174192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.138 [2024-11-18 08:09:59.174308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.138 [2024-11-18 08:09:59.174335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.138 [2024-11-18 08:09:59.174349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.138 [2024-11-18 08:09:59.174361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.138 [2024-11-18 08:09:59.174391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.139 qpair failed and we were unable to recover it.
00:36:06.139 [2024-11-18 08:09:59.184220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.139 [2024-11-18 08:09:59.184302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.139 [2024-11-18 08:09:59.184328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.139 [2024-11-18 08:09:59.184344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.139 [2024-11-18 08:09:59.184356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.139 [2024-11-18 08:09:59.184386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.139 qpair failed and we were unable to recover it.
00:36:06.139 [2024-11-18 08:09:59.194241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.139 [2024-11-18 08:09:59.194326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.139 [2024-11-18 08:09:59.194351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.139 [2024-11-18 08:09:59.194365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.139 [2024-11-18 08:09:59.194376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.139 [2024-11-18 08:09:59.194406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.139 qpair failed and we were unable to recover it.
00:36:06.139 [2024-11-18 08:09:59.204280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.139 [2024-11-18 08:09:59.204369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.139 [2024-11-18 08:09:59.204393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.139 [2024-11-18 08:09:59.204407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.139 [2024-11-18 08:09:59.204419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.139 [2024-11-18 08:09:59.204448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.139 qpair failed and we were unable to recover it.
00:36:06.139 [2024-11-18 08:09:59.214286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.139 [2024-11-18 08:09:59.214389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.139 [2024-11-18 08:09:59.214414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.139 [2024-11-18 08:09:59.214433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.139 [2024-11-18 08:09:59.214446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.139 [2024-11-18 08:09:59.214476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.139 qpair failed and we were unable to recover it.
00:36:06.139 [2024-11-18 08:09:59.224326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.139 [2024-11-18 08:09:59.224417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.139 [2024-11-18 08:09:59.224443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.139 [2024-11-18 08:09:59.224457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.139 [2024-11-18 08:09:59.224470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.139 [2024-11-18 08:09:59.224512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.139 qpair failed and we were unable to recover it.
00:36:06.396 [2024-11-18 08:09:59.234363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.396 [2024-11-18 08:09:59.234457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.396 [2024-11-18 08:09:59.234484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.396 [2024-11-18 08:09:59.234507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.396 [2024-11-18 08:09:59.234520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.396 [2024-11-18 08:09:59.234551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.396 qpair failed and we were unable to recover it.
00:36:06.396 [2024-11-18 08:09:59.244540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.244657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.244686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.244701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.244713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.244743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.254463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.254597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.254623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.254637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.254649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.254685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.264474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.264564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.264588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.264602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.264614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.264658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.274508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.274606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.274632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.274646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.274658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.274688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.284566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.284663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.284688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.284701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.284713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.284744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.294626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.294733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.294761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.294775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.294786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.294817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.304605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.304726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.304753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.304767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.304779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.304808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.314648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.314769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.314794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.314808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.314820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.314850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.324701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.324797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.324823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.324837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.324848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.324890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.334697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.334780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.334812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.334835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.334856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.334889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.344686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.344811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.344843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.344858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.344870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.344900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.354712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.354796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.354821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.354835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.354847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.354877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.364841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.364933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.364959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.364973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.364984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.365014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.374779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.397 [2024-11-18 08:09:59.374874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.397 [2024-11-18 08:09:59.374898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.397 [2024-11-18 08:09:59.374912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.397 [2024-11-18 08:09:59.374923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.397 [2024-11-18 08:09:59.374953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.397 qpair failed and we were unable to recover it.
00:36:06.397 [2024-11-18 08:09:59.384781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.384884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.384909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.384922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.384939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.384970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.394810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.394934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.394960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.394975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.394986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.395016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.404864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.404956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.404981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.404995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.405007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.405036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.414901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.415027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.415052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.415065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.415077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.415107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.424977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.425058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.425082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.425096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.425108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.425151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.434950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.435032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.435056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.435069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.435081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.435111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.444992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.445096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.445122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.445136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.445148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.445190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.454972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.455056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.455080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.455094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.455105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.455135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.465087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.465175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.397 [2024-11-18 08:09:59.465199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.397 [2024-11-18 08:09:59.465213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.397 [2024-11-18 08:09:59.465225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.397 [2024-11-18 08:09:59.465254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.397 qpair failed and we were unable to recover it. 
00:36:06.397 [2024-11-18 08:09:59.475079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.397 [2024-11-18 08:09:59.475173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.398 [2024-11-18 08:09:59.475204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.398 [2024-11-18 08:09:59.475218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.398 [2024-11-18 08:09:59.475230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.398 [2024-11-18 08:09:59.475260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.398 qpair failed and we were unable to recover it. 
00:36:06.398 [2024-11-18 08:09:59.485096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.655 [2024-11-18 08:09:59.485221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.655 [2024-11-18 08:09:59.485249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.655 [2024-11-18 08:09:59.485263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.655 [2024-11-18 08:09:59.485275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.655 [2024-11-18 08:09:59.485305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.655 qpair failed and we were unable to recover it. 
00:36:06.655 [2024-11-18 08:09:59.495106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.655 [2024-11-18 08:09:59.495233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.655 [2024-11-18 08:09:59.495259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.655 [2024-11-18 08:09:59.495273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.655 [2024-11-18 08:09:59.495285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.655 [2024-11-18 08:09:59.495316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.655 qpair failed and we were unable to recover it. 
00:36:06.655 [2024-11-18 08:09:59.505125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.655 [2024-11-18 08:09:59.505257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.655 [2024-11-18 08:09:59.505283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.655 [2024-11-18 08:09:59.505297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.655 [2024-11-18 08:09:59.505309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.655 [2024-11-18 08:09:59.505338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.655 qpair failed and we were unable to recover it. 
00:36:06.655 [2024-11-18 08:09:59.515247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.655 [2024-11-18 08:09:59.515378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.655 [2024-11-18 08:09:59.515404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.655 [2024-11-18 08:09:59.515418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.655 [2024-11-18 08:09:59.515436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.655 [2024-11-18 08:09:59.515480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.655 qpair failed and we were unable to recover it. 
00:36:06.655 [2024-11-18 08:09:59.525214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.655 [2024-11-18 08:09:59.525325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.655 [2024-11-18 08:09:59.525351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.655 [2024-11-18 08:09:59.525366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.655 [2024-11-18 08:09:59.525377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.655 [2024-11-18 08:09:59.525408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.655 qpair failed and we were unable to recover it. 
00:36:06.655 [2024-11-18 08:09:59.535257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.655 [2024-11-18 08:09:59.535343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.655 [2024-11-18 08:09:59.535369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.655 [2024-11-18 08:09:59.535382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.655 [2024-11-18 08:09:59.535394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.655 [2024-11-18 08:09:59.535424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.655 qpair failed and we were unable to recover it. 
00:36:06.655 [2024-11-18 08:09:59.545281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.655 [2024-11-18 08:09:59.545373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.655 [2024-11-18 08:09:59.545400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.655 [2024-11-18 08:09:59.545418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.655 [2024-11-18 08:09:59.545432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.655 [2024-11-18 08:09:59.545463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.555289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.555375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.555400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.555413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.555425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.555455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.565321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.565411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.565436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.565449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.565461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.565501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.575458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.575551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.575578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.575592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.575604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.575633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.585452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.585556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.585589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.585605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.585617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.585649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.595525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.595651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.595678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.595692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.595704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.595734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.605431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.605568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.605594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.605608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.605620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.605650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.615447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.615534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.615558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.615572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.615584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.615614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.625622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.625748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.625774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.625788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.625800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.625843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.635483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.635575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.635601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.635615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.635626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.635656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.645542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.645637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.645663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.645682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.645695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.645725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.655558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.655673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.655699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.655713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.655725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.655755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.665590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.665669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.665693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.665706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.665718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.665748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.656 [2024-11-18 08:09:59.675624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.656 [2024-11-18 08:09:59.675749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.656 [2024-11-18 08:09:59.675775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.656 [2024-11-18 08:09:59.675788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.656 [2024-11-18 08:09:59.675800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.656 [2024-11-18 08:09:59.675830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.656 qpair failed and we were unable to recover it. 
00:36:06.657 [2024-11-18 08:09:59.685685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.657 [2024-11-18 08:09:59.685811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.657 [2024-11-18 08:09:59.685836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.657 [2024-11-18 08:09:59.685850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.657 [2024-11-18 08:09:59.685862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.657 [2024-11-18 08:09:59.685898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.657 qpair failed and we were unable to recover it. 
00:36:06.657 [2024-11-18 08:09:59.695785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.657 [2024-11-18 08:09:59.695883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.657 [2024-11-18 08:09:59.695911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.657 [2024-11-18 08:09:59.695928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.657 [2024-11-18 08:09:59.695941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.657 [2024-11-18 08:09:59.695978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.657 qpair failed and we were unable to recover it. 
00:36:06.657 [2024-11-18 08:09:59.705702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.657 [2024-11-18 08:09:59.705831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.657 [2024-11-18 08:09:59.705857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.657 [2024-11-18 08:09:59.705871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.657 [2024-11-18 08:09:59.705882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.657 [2024-11-18 08:09:59.705912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.657 qpair failed and we were unable to recover it. 
00:36:06.657 [2024-11-18 08:09:59.715718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.657 [2024-11-18 08:09:59.715804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.657 [2024-11-18 08:09:59.715830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.657 [2024-11-18 08:09:59.715844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.657 [2024-11-18 08:09:59.715856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.657 [2024-11-18 08:09:59.715886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.657 qpair failed and we were unable to recover it. 
00:36:06.657 [2024-11-18 08:09:59.725851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.657 [2024-11-18 08:09:59.725974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.657 [2024-11-18 08:09:59.726000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.657 [2024-11-18 08:09:59.726014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.657 [2024-11-18 08:09:59.726026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.657 [2024-11-18 08:09:59.726068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.657 qpair failed and we were unable to recover it. 
00:36:06.657 [2024-11-18 08:09:59.735792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.657 [2024-11-18 08:09:59.735918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.657 [2024-11-18 08:09:59.735943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.657 [2024-11-18 08:09:59.735957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.657 [2024-11-18 08:09:59.735969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.657 [2024-11-18 08:09:59.735999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.657 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.745829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.917 [2024-11-18 08:09:59.745960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.917 [2024-11-18 08:09:59.745990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.917 [2024-11-18 08:09:59.746006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.917 [2024-11-18 08:09:59.746019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.917 [2024-11-18 08:09:59.746050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.755826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.917 [2024-11-18 08:09:59.755913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.917 [2024-11-18 08:09:59.755937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.917 [2024-11-18 08:09:59.755951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.917 [2024-11-18 08:09:59.755963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.917 [2024-11-18 08:09:59.755993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.765962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.917 [2024-11-18 08:09:59.766054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.917 [2024-11-18 08:09:59.766079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.917 [2024-11-18 08:09:59.766094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.917 [2024-11-18 08:09:59.766105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.917 [2024-11-18 08:09:59.766135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.775882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.917 [2024-11-18 08:09:59.775966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.917 [2024-11-18 08:09:59.775990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.917 [2024-11-18 08:09:59.776009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.917 [2024-11-18 08:09:59.776021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.917 [2024-11-18 08:09:59.776051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.785938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.917 [2024-11-18 08:09:59.786027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.917 [2024-11-18 08:09:59.786056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.917 [2024-11-18 08:09:59.786070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.917 [2024-11-18 08:09:59.786081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.917 [2024-11-18 08:09:59.786111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.796043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.917 [2024-11-18 08:09:59.796139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.917 [2024-11-18 08:09:59.796165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.917 [2024-11-18 08:09:59.796179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.917 [2024-11-18 08:09:59.796191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.917 [2024-11-18 08:09:59.796220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.806013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.917 [2024-11-18 08:09:59.806129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.917 [2024-11-18 08:09:59.806155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.917 [2024-11-18 08:09:59.806169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.917 [2024-11-18 08:09:59.806181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.917 [2024-11-18 08:09:59.806210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.816099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.917 [2024-11-18 08:09:59.816184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.917 [2024-11-18 08:09:59.816210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.917 [2024-11-18 08:09:59.816223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.917 [2024-11-18 08:09:59.816235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.917 [2024-11-18 08:09:59.816274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-18 08:09:59.826043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.826177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.826203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.826216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.826229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.826258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.836105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.836218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.836250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.836272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.836285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.836317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.846103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.846195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.846220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.846234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.846246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.846276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.856144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.856238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.856264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.856278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.856289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.856319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.866183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.866263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.866289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.866302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.866314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.866356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.876178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.876269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.876295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.876308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.876320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.876350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.886269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.886384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.886413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.886428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.886440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.886472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.896281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.896406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.896433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.896447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.896459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.896497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.906279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.906401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.906432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.906446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.906458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.906496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.916397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.916480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.916512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.916527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.916539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.916583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.926378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.926487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.926521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.926536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.926547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.926577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.936361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.936439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.936463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.936477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.936496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.936529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.946374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.918 [2024-11-18 08:09:59.946451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.918 [2024-11-18 08:09:59.946475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.918 [2024-11-18 08:09:59.946488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.918 [2024-11-18 08:09:59.946519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.918 [2024-11-18 08:09:59.946551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.918 qpair failed and we were unable to recover it. 
00:36:06.918 [2024-11-18 08:09:59.956438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.919 [2024-11-18 08:09:59.956563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.919 [2024-11-18 08:09:59.956593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.919 [2024-11-18 08:09:59.956609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.919 [2024-11-18 08:09:59.956621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:06.919 [2024-11-18 08:09:59.956651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.919 qpair failed and we were unable to recover it. 
00:36:06.919 [2024-11-18 08:09:59.966573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.919 [2024-11-18 08:09:59.966694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.919 [2024-11-18 08:09:59.966720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.919 [2024-11-18 08:09:59.966734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.919 [2024-11-18 08:09:59.966745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.919 [2024-11-18 08:09:59.966775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.919 qpair failed and we were unable to recover it.
00:36:06.919 [2024-11-18 08:09:59.976608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.919 [2024-11-18 08:09:59.976689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.919 [2024-11-18 08:09:59.976713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.919 [2024-11-18 08:09:59.976727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.919 [2024-11-18 08:09:59.976738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.919 [2024-11-18 08:09:59.976793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.919 qpair failed and we were unable to recover it.
00:36:06.919 [2024-11-18 08:09:59.986501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.919 [2024-11-18 08:09:59.986612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.919 [2024-11-18 08:09:59.986638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.919 [2024-11-18 08:09:59.986653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.919 [2024-11-18 08:09:59.986666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.919 [2024-11-18 08:09:59.986696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.919 qpair failed and we were unable to recover it.
00:36:06.919 [2024-11-18 08:09:59.996562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.919 [2024-11-18 08:09:59.996668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.919 [2024-11-18 08:09:59.996695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.919 [2024-11-18 08:09:59.996709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.919 [2024-11-18 08:09:59.996721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:06.919 [2024-11-18 08:09:59.996750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:06.919 qpair failed and we were unable to recover it.
00:36:07.179 [2024-11-18 08:10:00.006668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.179 [2024-11-18 08:10:00.006761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.179 [2024-11-18 08:10:00.006801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.179 [2024-11-18 08:10:00.006817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.179 [2024-11-18 08:10:00.006829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.179 [2024-11-18 08:10:00.006884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.179 qpair failed and we were unable to recover it.
00:36:07.179 [2024-11-18 08:10:00.016719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.179 [2024-11-18 08:10:00.016877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.179 [2024-11-18 08:10:00.016920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.179 [2024-11-18 08:10:00.016944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.179 [2024-11-18 08:10:00.016968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.179 [2024-11-18 08:10:00.017021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.179 qpair failed and we were unable to recover it.
00:36:07.179 [2024-11-18 08:10:00.026654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.179 [2024-11-18 08:10:00.026753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.179 [2024-11-18 08:10:00.026791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.179 [2024-11-18 08:10:00.026806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.179 [2024-11-18 08:10:00.026818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.179 [2024-11-18 08:10:00.026848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.179 qpair failed and we were unable to recover it.
00:36:07.179 [2024-11-18 08:10:00.036678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.179 [2024-11-18 08:10:00.036761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.179 [2024-11-18 08:10:00.036807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.179 [2024-11-18 08:10:00.036823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.179 [2024-11-18 08:10:00.036835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.179 [2024-11-18 08:10:00.036865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.179 qpair failed and we were unable to recover it.
00:36:07.179 [2024-11-18 08:10:00.046737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.179 [2024-11-18 08:10:00.046834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.179 [2024-11-18 08:10:00.046866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.179 [2024-11-18 08:10:00.046880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.179 [2024-11-18 08:10:00.046892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.179 [2024-11-18 08:10:00.046924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.179 qpair failed and we were unable to recover it.
00:36:07.179 [2024-11-18 08:10:00.056850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.179 [2024-11-18 08:10:00.056979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.179 [2024-11-18 08:10:00.057005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.179 [2024-11-18 08:10:00.057019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.179 [2024-11-18 08:10:00.057031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.179 [2024-11-18 08:10:00.057062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.179 qpair failed and we were unable to recover it.
00:36:07.179 [2024-11-18 08:10:00.066768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.179 [2024-11-18 08:10:00.066871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.179 [2024-11-18 08:10:00.066900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.179 [2024-11-18 08:10:00.066914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.179 [2024-11-18 08:10:00.066927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.179 [2024-11-18 08:10:00.066957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.179 qpair failed and we were unable to recover it.
00:36:07.179 [2024-11-18 08:10:00.076839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.076964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.076990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.077004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.077022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.077054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.086893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.086987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.087018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.087041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.087064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.087110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.096871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.096993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.097020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.097034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.097046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.097076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.106864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.106986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.107012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.107026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.107038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.107068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.116903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.116993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.117019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.117034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.117046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.117076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.126943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.127031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.127057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.127073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.127085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.127115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.137032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.137116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.137141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.137163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.137175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.137205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.147060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.147161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.147187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.147200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.147212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.147243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.157002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.157086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.157112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.157125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.157137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.157167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.167132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.167262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.167296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.167316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.167330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.167372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.177123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.177228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.180 [2024-11-18 08:10:00.177254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.180 [2024-11-18 08:10:00.177268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.180 [2024-11-18 08:10:00.177280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.180 [2024-11-18 08:10:00.177310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.180 qpair failed and we were unable to recover it.
00:36:07.180 [2024-11-18 08:10:00.187092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.180 [2024-11-18 08:10:00.187179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.181 [2024-11-18 08:10:00.187205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.181 [2024-11-18 08:10:00.187219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.181 [2024-11-18 08:10:00.187231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.181 [2024-11-18 08:10:00.187261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.181 qpair failed and we were unable to recover it.
00:36:07.181 [2024-11-18 08:10:00.197110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.181 [2024-11-18 08:10:00.197195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.181 [2024-11-18 08:10:00.197221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.181 [2024-11-18 08:10:00.197235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.181 [2024-11-18 08:10:00.197247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.181 [2024-11-18 08:10:00.197277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.181 qpair failed and we were unable to recover it.
00:36:07.181 [2024-11-18 08:10:00.207201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.181 [2024-11-18 08:10:00.207301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.181 [2024-11-18 08:10:00.207327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.181 [2024-11-18 08:10:00.207346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.181 [2024-11-18 08:10:00.207359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.181 [2024-11-18 08:10:00.207389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.181 qpair failed and we were unable to recover it.
00:36:07.181 [2024-11-18 08:10:00.217163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.181 [2024-11-18 08:10:00.217248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.181 [2024-11-18 08:10:00.217274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.181 [2024-11-18 08:10:00.217287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.181 [2024-11-18 08:10:00.217299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.181 [2024-11-18 08:10:00.217329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.181 qpair failed and we were unable to recover it.
00:36:07.181 [2024-11-18 08:10:00.227210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.181 [2024-11-18 08:10:00.227307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.181 [2024-11-18 08:10:00.227332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.181 [2024-11-18 08:10:00.227347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.181 [2024-11-18 08:10:00.227358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.181 [2024-11-18 08:10:00.227387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.181 qpair failed and we were unable to recover it.
00:36:07.181 [2024-11-18 08:10:00.237236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.181 [2024-11-18 08:10:00.237319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.181 [2024-11-18 08:10:00.237346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.181 [2024-11-18 08:10:00.237359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.181 [2024-11-18 08:10:00.237371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.181 [2024-11-18 08:10:00.237401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.181 qpair failed and we were unable to recover it.
00:36:07.181 [2024-11-18 08:10:00.247246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.181 [2024-11-18 08:10:00.247333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.181 [2024-11-18 08:10:00.247359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.181 [2024-11-18 08:10:00.247373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.181 [2024-11-18 08:10:00.247385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.181 [2024-11-18 08:10:00.247420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.181 qpair failed and we were unable to recover it.
00:36:07.181 [2024-11-18 08:10:00.257288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.181 [2024-11-18 08:10:00.257403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.181 [2024-11-18 08:10:00.257428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.181 [2024-11-18 08:10:00.257442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.181 [2024-11-18 08:10:00.257454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.181 [2024-11-18 08:10:00.257484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.181 qpair failed and we were unable to recover it.
00:36:07.441 [2024-11-18 08:10:00.267317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.441 [2024-11-18 08:10:00.267409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.441 [2024-11-18 08:10:00.267436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.441 [2024-11-18 08:10:00.267450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.441 [2024-11-18 08:10:00.267461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.441 [2024-11-18 08:10:00.267501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.441 qpair failed and we were unable to recover it.
00:36:07.441 [2024-11-18 08:10:00.277356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.441 [2024-11-18 08:10:00.277437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.441 [2024-11-18 08:10:00.277463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.441 [2024-11-18 08:10:00.277478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.441 [2024-11-18 08:10:00.277496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.441 [2024-11-18 08:10:00.277530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.441 qpair failed and we were unable to recover it.
00:36:07.441 [2024-11-18 08:10:00.287387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.441 [2024-11-18 08:10:00.287478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.441 [2024-11-18 08:10:00.287512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.441 [2024-11-18 08:10:00.287538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.441 [2024-11-18 08:10:00.287550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.441 [2024-11-18 08:10:00.287581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.441 qpair failed and we were unable to recover it.
00:36:07.441 [2024-11-18 08:10:00.297384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.441 [2024-11-18 08:10:00.297471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.441 [2024-11-18 08:10:00.297506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.441 [2024-11-18 08:10:00.297522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.441 [2024-11-18 08:10:00.297534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.441 [2024-11-18 08:10:00.297564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.441 qpair failed and we were unable to recover it.
00:36:07.441 [2024-11-18 08:10:00.307534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.441 [2024-11-18 08:10:00.307625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.441 [2024-11-18 08:10:00.307652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.441 [2024-11-18 08:10:00.307668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.441 [2024-11-18 08:10:00.307680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.441 [2024-11-18 08:10:00.307709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.441 qpair failed and we were unable to recover it.
00:36:07.441 [2024-11-18 08:10:00.317475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.441 [2024-11-18 08:10:00.317575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.441 [2024-11-18 08:10:00.317601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.441 [2024-11-18 08:10:00.317616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.441 [2024-11-18 08:10:00.317628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.441 [2024-11-18 08:10:00.317658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.441 qpair failed and we were unable to recover it. 
00:36:07.441 [2024-11-18 08:10:00.327505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.441 [2024-11-18 08:10:00.327601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.441 [2024-11-18 08:10:00.327627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.441 [2024-11-18 08:10:00.327642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.441 [2024-11-18 08:10:00.327654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.441 [2024-11-18 08:10:00.327683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.441 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.337638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.337740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.337778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.337795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.337808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.337853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.347559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.347661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.347687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.347702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.347714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.347745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.357588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.357677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.357703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.357717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.357729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.357759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.367712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.367806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.367832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.367846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.367858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.367887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.377624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.377722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.377748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.377762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.377774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.377809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.387680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.387773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.387797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.387810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.387822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.387851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.397671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.397758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.397788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.397803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.397815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.397845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.407731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.407847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.407873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.407888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.407900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.407930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.417794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.417874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.417900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.417913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.417925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.417955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.427782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.427881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.427907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.427921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.427932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.427962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.437788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.437866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.437892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.437906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.437917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.437953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.447904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.448018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.448045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.448059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.448071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.442 [2024-11-18 08:10:00.448113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.442 qpair failed and we were unable to recover it. 
00:36:07.442 [2024-11-18 08:10:00.457843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.442 [2024-11-18 08:10:00.457925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.442 [2024-11-18 08:10:00.457951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.442 [2024-11-18 08:10:00.457965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.442 [2024-11-18 08:10:00.457977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.443 [2024-11-18 08:10:00.458007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.443 qpair failed and we were unable to recover it. 
00:36:07.443 [2024-11-18 08:10:00.467946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.443 [2024-11-18 08:10:00.468039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.443 [2024-11-18 08:10:00.468069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.443 [2024-11-18 08:10:00.468084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.443 [2024-11-18 08:10:00.468096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.443 [2024-11-18 08:10:00.468126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.443 qpair failed and we were unable to recover it. 
00:36:07.443 [2024-11-18 08:10:00.477882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.443 [2024-11-18 08:10:00.477965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.443 [2024-11-18 08:10:00.477991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.443 [2024-11-18 08:10:00.478005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.443 [2024-11-18 08:10:00.478016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.443 [2024-11-18 08:10:00.478046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.443 qpair failed and we were unable to recover it. 
00:36:07.443 [2024-11-18 08:10:00.488011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.443 [2024-11-18 08:10:00.488097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.443 [2024-11-18 08:10:00.488122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.443 [2024-11-18 08:10:00.488137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.443 [2024-11-18 08:10:00.488148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.443 [2024-11-18 08:10:00.488178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.443 qpair failed and we were unable to recover it. 
00:36:07.443 [2024-11-18 08:10:00.498003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.443 [2024-11-18 08:10:00.498134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.443 [2024-11-18 08:10:00.498160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.443 [2024-11-18 08:10:00.498174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.443 [2024-11-18 08:10:00.498186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.443 [2024-11-18 08:10:00.498215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.443 qpair failed and we were unable to recover it. 
00:36:07.443 [2024-11-18 08:10:00.508004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.443 [2024-11-18 08:10:00.508098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.443 [2024-11-18 08:10:00.508124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.443 [2024-11-18 08:10:00.508138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.443 [2024-11-18 08:10:00.508156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.443 [2024-11-18 08:10:00.508186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.443 qpair failed and we were unable to recover it. 
00:36:07.443 [2024-11-18 08:10:00.518006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.443 [2024-11-18 08:10:00.518131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.443 [2024-11-18 08:10:00.518157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.443 [2024-11-18 08:10:00.518171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.443 [2024-11-18 08:10:00.518182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.443 [2024-11-18 08:10:00.518212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.443 qpair failed and we were unable to recover it. 
00:36:07.443 [2024-11-18 08:10:00.528147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.443 [2024-11-18 08:10:00.528244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.443 [2024-11-18 08:10:00.528274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.443 [2024-11-18 08:10:00.528290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.443 [2024-11-18 08:10:00.528302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.443 [2024-11-18 08:10:00.528333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.443 qpair failed and we were unable to recover it. 
00:36:07.703 [2024-11-18 08:10:00.538227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.703 [2024-11-18 08:10:00.538365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.703 [2024-11-18 08:10:00.538392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.703 [2024-11-18 08:10:00.538406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.703 [2024-11-18 08:10:00.538419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.703 [2024-11-18 08:10:00.538449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.703 qpair failed and we were unable to recover it. 
00:36:07.703 [2024-11-18 08:10:00.548199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.703 [2024-11-18 08:10:00.548293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.703 [2024-11-18 08:10:00.548320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.703 [2024-11-18 08:10:00.548334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.703 [2024-11-18 08:10:00.548346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.703 [2024-11-18 08:10:00.548390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.703 qpair failed and we were unable to recover it. 
00:36:07.703 [2024-11-18 08:10:00.558187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.703 [2024-11-18 08:10:00.558275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.703 [2024-11-18 08:10:00.558305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.703 [2024-11-18 08:10:00.558319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.703 [2024-11-18 08:10:00.558331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.703 [2024-11-18 08:10:00.558362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.703 qpair failed and we were unable to recover it. 
00:36:07.703 [2024-11-18 08:10:00.568210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.703 [2024-11-18 08:10:00.568305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.703 [2024-11-18 08:10:00.568330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.703 [2024-11-18 08:10:00.568344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.703 [2024-11-18 08:10:00.568356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.703 [2024-11-18 08:10:00.568385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.703 qpair failed and we were unable to recover it. 
00:36:07.703 [2024-11-18 08:10:00.578185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.703 [2024-11-18 08:10:00.578287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.703 [2024-11-18 08:10:00.578312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.703 [2024-11-18 08:10:00.578326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.703 [2024-11-18 08:10:00.578338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.703 [2024-11-18 08:10:00.578367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.703 qpair failed and we were unable to recover it. 
00:36:07.703 [2024-11-18 08:10:00.588250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.703 [2024-11-18 08:10:00.588353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.703 [2024-11-18 08:10:00.588386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.703 [2024-11-18 08:10:00.588409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.703 [2024-11-18 08:10:00.588422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.703 [2024-11-18 08:10:00.588454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.703 qpair failed and we were unable to recover it.
00:36:07.703 [2024-11-18 08:10:00.598290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.703 [2024-11-18 08:10:00.598376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.703 [2024-11-18 08:10:00.598409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.703 [2024-11-18 08:10:00.598425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.703 [2024-11-18 08:10:00.598437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.703 [2024-11-18 08:10:00.598467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.703 qpair failed and we were unable to recover it.
00:36:07.703 [2024-11-18 08:10:00.608276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.703 [2024-11-18 08:10:00.608366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.703 [2024-11-18 08:10:00.608392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.703 [2024-11-18 08:10:00.608406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.703 [2024-11-18 08:10:00.608419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.703 [2024-11-18 08:10:00.608448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.703 qpair failed and we were unable to recover it.
00:36:07.703 [2024-11-18 08:10:00.618319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.703 [2024-11-18 08:10:00.618401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.703 [2024-11-18 08:10:00.618427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.703 [2024-11-18 08:10:00.618441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.703 [2024-11-18 08:10:00.618452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.703 [2024-11-18 08:10:00.618482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.703 qpair failed and we were unable to recover it.
00:36:07.703 [2024-11-18 08:10:00.628369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.628462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.628488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.628510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.628522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.628553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.638375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.638461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.638487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.638510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.638528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.638558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.648503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.648593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.648619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.648632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.648644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.648674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.658406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.658496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.658523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.658538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.658549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.658579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.668498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.668600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.668626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.668640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.668651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.668682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.678485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.678614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.678640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.678653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.678665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.678694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.688525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.688618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.688647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.688662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.688674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.688704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.698541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.698634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.698660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.698674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.698686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.698716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.708673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.708800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.708826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.708841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.708853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.708883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.718579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.718664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.718692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.718707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.718719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.718749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.728616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.728718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.728743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.728757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.728769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.728799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.738643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.738728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.738754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.738768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.738780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.738810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.748697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.704 [2024-11-18 08:10:00.748786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.704 [2024-11-18 08:10:00.748813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.704 [2024-11-18 08:10:00.748827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.704 [2024-11-18 08:10:00.748838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.704 [2024-11-18 08:10:00.748868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.704 qpair failed and we were unable to recover it.
00:36:07.704 [2024-11-18 08:10:00.758722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.705 [2024-11-18 08:10:00.758847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.705 [2024-11-18 08:10:00.758872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.705 [2024-11-18 08:10:00.758886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.705 [2024-11-18 08:10:00.758897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.705 [2024-11-18 08:10:00.758927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.705 qpair failed and we were unable to recover it.
00:36:07.705 [2024-11-18 08:10:00.768747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.705 [2024-11-18 08:10:00.768860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.705 [2024-11-18 08:10:00.768885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.705 [2024-11-18 08:10:00.768905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.705 [2024-11-18 08:10:00.768917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.705 [2024-11-18 08:10:00.768946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.705 qpair failed and we were unable to recover it.
00:36:07.705 [2024-11-18 08:10:00.778778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.705 [2024-11-18 08:10:00.778861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.705 [2024-11-18 08:10:00.778887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.705 [2024-11-18 08:10:00.778901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.705 [2024-11-18 08:10:00.778912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.705 [2024-11-18 08:10:00.778942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.705 qpair failed and we were unable to recover it.
00:36:07.705 [2024-11-18 08:10:00.788800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.705 [2024-11-18 08:10:00.788900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.705 [2024-11-18 08:10:00.788926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.705 [2024-11-18 08:10:00.788940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.705 [2024-11-18 08:10:00.788953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.705 [2024-11-18 08:10:00.788983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.705 qpair failed and we were unable to recover it.
00:36:07.964 [2024-11-18 08:10:00.798865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.964 [2024-11-18 08:10:00.798947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.964 [2024-11-18 08:10:00.798974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.964 [2024-11-18 08:10:00.798989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.964 [2024-11-18 08:10:00.799000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.799030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.808856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.808947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.808973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.808987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.808999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.809034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.818906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.818990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.819017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.819031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.819043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.819086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.828894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.828979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.829005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.829019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.829031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.829061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.838957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.839055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.839088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.839110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.839122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.839156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.848978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.849067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.849094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.849109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.849121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.849151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.859031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.859127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.859153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.859168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.859179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.859209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.869061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.869149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.869175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.869189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.869201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.869230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.879153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.879235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.879261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.879275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.879286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.879316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.889104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.889229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.889255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.889269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.889280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.889310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.899119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.899200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.899231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.899246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.899258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.899288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.909257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.909349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.909374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.909388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.909400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.909429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.965 qpair failed and we were unable to recover it.
00:36:07.965 [2024-11-18 08:10:00.919203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.965 [2024-11-18 08:10:00.919329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.965 [2024-11-18 08:10:00.919355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.965 [2024-11-18 08:10:00.919369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.965 [2024-11-18 08:10:00.919380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.965 [2024-11-18 08:10:00.919410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.966 qpair failed and we were unable to recover it.
00:36:07.966 [2024-11-18 08:10:00.929220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.966 [2024-11-18 08:10:00.929312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.966 [2024-11-18 08:10:00.929337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.966 [2024-11-18 08:10:00.929351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.966 [2024-11-18 08:10:00.929363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:07.966 [2024-11-18 08:10:00.929392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:07.966 qpair failed and we were unable to recover it.
00:36:07.966 [2024-11-18 08:10:00.939265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:00.939353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:00.939379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:00.939392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:00.939404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:00.939440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:00.949296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:00.949383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:00.949409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:00.949423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:00.949435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:00.949464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:00.959320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:00.959406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:00.959432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:00.959446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:00.959458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:00.959487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:00.969385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:00.969510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:00.969536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:00.969550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:00.969562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:00.969592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:00.979409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:00.979514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:00.979540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:00.979554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:00.979566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:00.979597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:00.989426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:00.989516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:00.989545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:00.989560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:00.989571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:00.989601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:00.999487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:00.999579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:00.999606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:00.999620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:00.999632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:00.999664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:01.009546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:01.009642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:01.009668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:01.009683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:01.009695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:01.009725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:01.019541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:01.019633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:01.019659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:01.019673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:01.019684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:01.019714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:01.029535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:01.029638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:01.029669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:01.029684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:01.029695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:01.029725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:01.039564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:01.039653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:01.039678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:01.039692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.966 [2024-11-18 08:10:01.039704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.966 [2024-11-18 08:10:01.039733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.966 qpair failed and we were unable to recover it. 
00:36:07.966 [2024-11-18 08:10:01.049625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.966 [2024-11-18 08:10:01.049718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.966 [2024-11-18 08:10:01.049744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.966 [2024-11-18 08:10:01.049758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.967 [2024-11-18 08:10:01.049771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:07.967 [2024-11-18 08:10:01.049801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.967 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-11-18 08:10:01.059646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.226 [2024-11-18 08:10:01.059736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.226 [2024-11-18 08:10:01.059764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.226 [2024-11-18 08:10:01.059778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.226 [2024-11-18 08:10:01.059790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.226 [2024-11-18 08:10:01.059821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-11-18 08:10:01.069762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.226 [2024-11-18 08:10:01.069855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.226 [2024-11-18 08:10:01.069886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.226 [2024-11-18 08:10:01.069902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.226 [2024-11-18 08:10:01.069920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.226 [2024-11-18 08:10:01.069952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-11-18 08:10:01.079704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.226 [2024-11-18 08:10:01.079821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.226 [2024-11-18 08:10:01.079848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.226 [2024-11-18 08:10:01.079862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.226 [2024-11-18 08:10:01.079874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.226 [2024-11-18 08:10:01.079904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-11-18 08:10:01.089833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.226 [2024-11-18 08:10:01.089930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.226 [2024-11-18 08:10:01.089964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.226 [2024-11-18 08:10:01.089982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.226 [2024-11-18 08:10:01.089995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.226 [2024-11-18 08:10:01.090026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-11-18 08:10:01.099746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.226 [2024-11-18 08:10:01.099833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.226 [2024-11-18 08:10:01.099860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.226 [2024-11-18 08:10:01.099874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.226 [2024-11-18 08:10:01.099886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.226 [2024-11-18 08:10:01.099916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-11-18 08:10:01.109775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.226 [2024-11-18 08:10:01.109865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.226 [2024-11-18 08:10:01.109891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.226 [2024-11-18 08:10:01.109906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.226 [2024-11-18 08:10:01.109917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.109948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.119791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.119874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.119901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.119915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.119927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.119957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.129851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.129946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.129972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.129986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.129997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.130027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.139849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.139981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.140006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.140020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.140032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.140062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.149881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.149967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.149994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.150008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.150020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.150050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.159929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.160006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.160038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.160052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.160064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.160094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.169942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.170071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.170097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.170111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.170123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.170153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.179999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.180086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.180112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.180126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.180138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.180168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.189998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.190113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.190138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.190152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.190163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.190193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.199995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.200077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.200103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.200122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.200135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.200165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.210053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.227 [2024-11-18 08:10:01.210147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.227 [2024-11-18 08:10:01.210172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.227 [2024-11-18 08:10:01.210186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.227 [2024-11-18 08:10:01.210197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.227 [2024-11-18 08:10:01.210227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-11-18 08:10:01.220105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.220232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.220258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.220272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.220283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.220313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.230113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.230199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.230225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.230239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.230251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.230280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.240151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.240234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.240259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.240273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.240285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.240314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.250170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.250258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.250283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.250297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.250309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.250338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.260200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.260289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.260314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.260328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.260340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.260369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.270221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.270308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.270334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.270347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.270359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.270389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.280309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.280388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.280414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.280428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.280440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.280470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.290356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.290448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.290475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.290488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.290512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.290542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.300316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.300404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.300434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.300450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.300462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.300502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-11-18 08:10:01.310412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.228 [2024-11-18 08:10:01.310515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.228 [2024-11-18 08:10:01.310544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.228 [2024-11-18 08:10:01.310559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.228 [2024-11-18 08:10:01.310571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.228 [2024-11-18 08:10:01.310605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.320460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.320554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.320582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.320597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.320609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.320653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.330534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.330631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.330661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.330686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.330700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.330731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.340435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.340548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.340579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.340598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.340611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.340643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.350443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.350540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.350568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.350582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.350594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.350624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.360466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.360559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.360586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.360600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.360612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.360643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.370561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.370651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.370677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.370691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.370703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.370740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.380570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.380690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.380716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.380730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.380742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.380772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.390553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.390644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.390668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.390681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.390693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.390722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.400581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.400666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.400692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.400706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.400718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.400747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.410648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.410741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.410770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.410786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.410798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.487 [2024-11-18 08:10:01.410828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.487 qpair failed and we were unable to recover it. 
00:36:08.487 [2024-11-18 08:10:01.420639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.487 [2024-11-18 08:10:01.420727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.487 [2024-11-18 08:10:01.420753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.487 [2024-11-18 08:10:01.420767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.487 [2024-11-18 08:10:01.420779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.420809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.430737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.430842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.430868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.430882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.430893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.430923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.440708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.440793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.440819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.440833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.440844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.440874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.450772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.450864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.450890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.450904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.450915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.450945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.460914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.461049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.461080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.461095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.461108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.461138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.470819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.470908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.470934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.470948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.470959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.471003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.480826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.480909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.480935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.480949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.480961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.480991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.490833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.490964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.490990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.491004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.491015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.491045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.500883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.500974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.501000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.501014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.501031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.501062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.510989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.511081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.511107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.511121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.511133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.511184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.521002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.521089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.521116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.521129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.521141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.521170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.531000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.531089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.531115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.531129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.531140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.531170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.541085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.541173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.541199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.541213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.541225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.541255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.551039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.551125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.551150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.551165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.551177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.551206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.561114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.561201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.561227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.561241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.561253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.561282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.488 [2024-11-18 08:10:01.571083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.488 [2024-11-18 08:10:01.571176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.488 [2024-11-18 08:10:01.571202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.488 [2024-11-18 08:10:01.571216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.488 [2024-11-18 08:10:01.571228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.488 [2024-11-18 08:10:01.571259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.488 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.581120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.581212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.581240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.581254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.581265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.581296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.591159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.591275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.591311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.591327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.591339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.591371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.601192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.601276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.601303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.601317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.601329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.601360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.611208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.611311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.611338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.611352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.611364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.611394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.621255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.621340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.621366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.621380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.621392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.621422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.631323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.631409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.631438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.631454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.631471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.631510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.641285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.641384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.641410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.641424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.641436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.641466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.651420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.651552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.651579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.651592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.651604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.651633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.661325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.661404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.661430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.661444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.661455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.747 [2024-11-18 08:10:01.661485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.747 qpair failed and we were unable to recover it. 
00:36:08.747 [2024-11-18 08:10:01.671375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.747 [2024-11-18 08:10:01.671461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.747 [2024-11-18 08:10:01.671486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.747 [2024-11-18 08:10:01.671515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.747 [2024-11-18 08:10:01.671528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.671558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.681387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.681478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.681511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.681527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.681538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.681568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.691452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.691555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.691581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.691595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.691607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.691637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.701442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.701537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.701564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.701578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.701589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.701620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.711485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.711598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.711627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.711642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.711654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.711684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.721604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.721722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.721754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.721770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.721781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.721839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.731566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.731662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.731688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.731702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.731714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.731744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.741562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.741652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.741678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.741692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.741704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.741734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.751590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.751719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.751745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.751759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.751771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.751800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.761624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.761705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.761731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.761751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.761763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.761793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.771760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.771853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.771879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.771893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.771904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.771935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.781817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.781905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.781931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.781945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.781956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.781986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.791799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.791883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.791909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.791923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.791935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.748 [2024-11-18 08:10:01.791978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.748 qpair failed and we were unable to recover it. 
00:36:08.748 [2024-11-18 08:10:01.801732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.748 [2024-11-18 08:10:01.801818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.748 [2024-11-18 08:10:01.801844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.748 [2024-11-18 08:10:01.801858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.748 [2024-11-18 08:10:01.801870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.749 [2024-11-18 08:10:01.801900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.749 qpair failed and we were unable to recover it. 
00:36:08.749 [2024-11-18 08:10:01.811807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.749 [2024-11-18 08:10:01.811896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.749 [2024-11-18 08:10:01.811921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.749 [2024-11-18 08:10:01.811935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.749 [2024-11-18 08:10:01.811946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.749 [2024-11-18 08:10:01.811976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.749 qpair failed and we were unable to recover it. 
00:36:08.749 [2024-11-18 08:10:01.821819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.749 [2024-11-18 08:10:01.821905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.749 [2024-11-18 08:10:01.821930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.749 [2024-11-18 08:10:01.821944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.749 [2024-11-18 08:10:01.821956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.749 [2024-11-18 08:10:01.821985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.749 qpair failed and we were unable to recover it. 
00:36:08.749 [2024-11-18 08:10:01.831857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.749 [2024-11-18 08:10:01.831942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.749 [2024-11-18 08:10:01.831970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.749 [2024-11-18 08:10:01.831984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.749 [2024-11-18 08:10:01.831996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:08.749 [2024-11-18 08:10:01.832027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.749 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.841894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.841995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.842028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.842045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.842057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.008 [2024-11-18 08:10:01.842093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.008 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.851905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.852041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.852068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.852082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.852094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.008 [2024-11-18 08:10:01.852125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.008 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.861947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.862033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.862060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.862074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.862085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.008 [2024-11-18 08:10:01.862116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.008 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.871998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.872111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.872137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.872151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.872163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.008 [2024-11-18 08:10:01.872193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.008 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.881993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.882076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.882103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.882118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.882129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.008 [2024-11-18 08:10:01.882172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.008 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.891997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.892130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.892156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.892181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.892194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.008 [2024-11-18 08:10:01.892224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.008 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.902022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.902130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.902156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.902170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.902182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.008 [2024-11-18 08:10:01.902212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.008 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.912083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.912202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.912228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.912242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.912254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.008 [2024-11-18 08:10:01.912284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.008 qpair failed and we were unable to recover it. 
00:36:09.008 [2024-11-18 08:10:01.922081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.008 [2024-11-18 08:10:01.922159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.008 [2024-11-18 08:10:01.922186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.008 [2024-11-18 08:10:01.922200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.008 [2024-11-18 08:10:01.922212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:01.922242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:01.932158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:01.932244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:01.932270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:01.932284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:01.932295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:01.932331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:01.942149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:01.942242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:01.942268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:01.942282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:01.942294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:01.942324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:01.952218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:01.952339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:01.952364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:01.952378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:01.952390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:01.952420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:01.962327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:01.962407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:01.962434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:01.962448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:01.962461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:01.962511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:01.972246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:01.972366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:01.972392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:01.972406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:01.972418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:01.972448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:01.982262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:01.982343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:01.982369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:01.982382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:01.982394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:01.982424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:01.992279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:01.992373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:01.992398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:01.992411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:01.992423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:01.992452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:02.002342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:02.002424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:02.002450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:02.002464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:02.002476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:02.002512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:02.012398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:02.012519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:02.012546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:02.012559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:02.012571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:02.012601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:02.022423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:02.022516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:02.022551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:02.022566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:02.022578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:02.022608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:02.032515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:02.032607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:02.032634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:02.032648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:02.032659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:02.032703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:02.042440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:02.042533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:02.042559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:02.042573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:02.042585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.009 [2024-11-18 08:10:02.042615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.009 qpair failed and we were unable to recover it. 
00:36:09.009 [2024-11-18 08:10:02.052526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.009 [2024-11-18 08:10:02.052650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.009 [2024-11-18 08:10:02.052675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.009 [2024-11-18 08:10:02.052689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.009 [2024-11-18 08:10:02.052701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.010 [2024-11-18 08:10:02.052731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.010 qpair failed and we were unable to recover it. 
00:36:09.010 [2024-11-18 08:10:02.062600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.010 [2024-11-18 08:10:02.062689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.010 [2024-11-18 08:10:02.062714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.010 [2024-11-18 08:10:02.062728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.010 [2024-11-18 08:10:02.062746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.010 [2024-11-18 08:10:02.062777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.010 qpair failed and we were unable to recover it. 
00:36:09.010 [2024-11-18 08:10:02.072533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.010 [2024-11-18 08:10:02.072652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.010 [2024-11-18 08:10:02.072678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.010 [2024-11-18 08:10:02.072692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.010 [2024-11-18 08:10:02.072704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.010 [2024-11-18 08:10:02.072734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.010 qpair failed and we were unable to recover it. 
00:36:09.010 [2024-11-18 08:10:02.082632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.010 [2024-11-18 08:10:02.082724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.010 [2024-11-18 08:10:02.082750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.010 [2024-11-18 08:10:02.082764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.010 [2024-11-18 08:10:02.082775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.010 [2024-11-18 08:10:02.082822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.010 qpair failed and we were unable to recover it. 
00:36:09.010 [2024-11-18 08:10:02.092687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.010 [2024-11-18 08:10:02.092835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.010 [2024-11-18 08:10:02.092862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.010 [2024-11-18 08:10:02.092877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.010 [2024-11-18 08:10:02.092889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.010 [2024-11-18 08:10:02.092920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.010 qpair failed and we were unable to recover it. 
00:36:09.269 [2024-11-18 08:10:02.102635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.269 [2024-11-18 08:10:02.102729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.269 [2024-11-18 08:10:02.102758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.269 [2024-11-18 08:10:02.102773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.269 [2024-11-18 08:10:02.102784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.269 [2024-11-18 08:10:02.102815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.269 qpair failed and we were unable to recover it. 
00:36:09.269 [2024-11-18 08:10:02.112698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.269 [2024-11-18 08:10:02.112816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.269 [2024-11-18 08:10:02.112843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.269 [2024-11-18 08:10:02.112857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.269 [2024-11-18 08:10:02.112869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.269 [2024-11-18 08:10:02.112900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.269 qpair failed and we were unable to recover it. 
00:36:09.269 [2024-11-18 08:10:02.122689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.269 [2024-11-18 08:10:02.122779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.269 [2024-11-18 08:10:02.122809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.269 [2024-11-18 08:10:02.122825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.269 [2024-11-18 08:10:02.122838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.269 [2024-11-18 08:10:02.122868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.269 qpair failed and we were unable to recover it. 
00:36:09.269 [2024-11-18 08:10:02.132772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.269 [2024-11-18 08:10:02.132894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.269 [2024-11-18 08:10:02.132923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.269 [2024-11-18 08:10:02.132938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.269 [2024-11-18 08:10:02.132950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.269 [2024-11-18 08:10:02.132980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.269 qpair failed and we were unable to recover it. 
00:36:09.269 [2024-11-18 08:10:02.142787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.269 [2024-11-18 08:10:02.142880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.269 [2024-11-18 08:10:02.142906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.269 [2024-11-18 08:10:02.142920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.269 [2024-11-18 08:10:02.142931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.269 [2024-11-18 08:10:02.142976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.269 qpair failed and we were unable to recover it.
00:36:09.269 [2024-11-18 08:10:02.152864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.269 [2024-11-18 08:10:02.152955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.269 [2024-11-18 08:10:02.152986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.269 [2024-11-18 08:10:02.153000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.269 [2024-11-18 08:10:02.153012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.269 [2024-11-18 08:10:02.153042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.269 qpair failed and we were unable to recover it.
00:36:09.269 [2024-11-18 08:10:02.162818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.269 [2024-11-18 08:10:02.162955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.269 [2024-11-18 08:10:02.162981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.269 [2024-11-18 08:10:02.162995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.269 [2024-11-18 08:10:02.163007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.269 [2024-11-18 08:10:02.163037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.269 qpair failed and we were unable to recover it.
00:36:09.269 [2024-11-18 08:10:02.172873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.269 [2024-11-18 08:10:02.172963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.172990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.173004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.173015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.173045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.182884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.183007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.183046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.183064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.183076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.183107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.192868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.192957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.192983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.192997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.193015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.193045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.202928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.203009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.203035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.203049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.203061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.203104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.212998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.213101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.213127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.213141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.213152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.213182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.223037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.223129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.223155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.223170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.223181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.223211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.233071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.233152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.233178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.233193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.233204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.233234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.243086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.243166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.243191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.243205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.243218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.243248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.253149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.253242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.253268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.253283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.253295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.253324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.263184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.263285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.263314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.263330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.263342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.263373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.273106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.273194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.273220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.273234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.273246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.273276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.283134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.283221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.283252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.283267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.283278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.283308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.270 [2024-11-18 08:10:02.293292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.270 [2024-11-18 08:10:02.293385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.270 [2024-11-18 08:10:02.293411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.270 [2024-11-18 08:10:02.293425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.270 [2024-11-18 08:10:02.293436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.270 [2024-11-18 08:10:02.293465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.270 qpair failed and we were unable to recover it.
00:36:09.271 [2024-11-18 08:10:02.303221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.271 [2024-11-18 08:10:02.303336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.271 [2024-11-18 08:10:02.303362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.271 [2024-11-18 08:10:02.303376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.271 [2024-11-18 08:10:02.303388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.271 [2024-11-18 08:10:02.303417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.271 qpair failed and we were unable to recover it.
00:36:09.271 [2024-11-18 08:10:02.313266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.271 [2024-11-18 08:10:02.313351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.271 [2024-11-18 08:10:02.313377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.271 [2024-11-18 08:10:02.313391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.271 [2024-11-18 08:10:02.313403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.271 [2024-11-18 08:10:02.313433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.271 qpair failed and we were unable to recover it.
00:36:09.271 [2024-11-18 08:10:02.323247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.271 [2024-11-18 08:10:02.323339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.271 [2024-11-18 08:10:02.323365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.271 [2024-11-18 08:10:02.323385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.271 [2024-11-18 08:10:02.323398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.271 [2024-11-18 08:10:02.323428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.271 qpair failed and we were unable to recover it.
00:36:09.271 [2024-11-18 08:10:02.333317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.271 [2024-11-18 08:10:02.333406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.271 [2024-11-18 08:10:02.333432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.271 [2024-11-18 08:10:02.333446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.271 [2024-11-18 08:10:02.333458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.271 [2024-11-18 08:10:02.333488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.271 qpair failed and we were unable to recover it.
00:36:09.271 [2024-11-18 08:10:02.343330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.271 [2024-11-18 08:10:02.343424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.271 [2024-11-18 08:10:02.343457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.271 [2024-11-18 08:10:02.343475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.271 [2024-11-18 08:10:02.343487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.271 [2024-11-18 08:10:02.343536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.271 qpair failed and we were unable to recover it.
00:36:09.271 [2024-11-18 08:10:02.353401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.271 [2024-11-18 08:10:02.353536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.271 [2024-11-18 08:10:02.353564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.271 [2024-11-18 08:10:02.353578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.271 [2024-11-18 08:10:02.353590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.271 [2024-11-18 08:10:02.353622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.271 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.363466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.363572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.363601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.363616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.363628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.363660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.373431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.373535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.373562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.373577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.373588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.373619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.383455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.383582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.383608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.383622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.383634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.383664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.393467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.393560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.393585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.393598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.393610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.393640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.403551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.403637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.403664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.403678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.403690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.403720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.413534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.413629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.413654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.413668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.413680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.413710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.423549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.423634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.423659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.423673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.423684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.423714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.433569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.433652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.433678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.433691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.433703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.433732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.443692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.443769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.443795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.443809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.443821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.443865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.453640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.453778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.453804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.453823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.453835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.453865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.463656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.463735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.463761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.463774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.463786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.463816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.473696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.473817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.473842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.473856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.473867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.473897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.483705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.531 [2024-11-18 08:10:02.483788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.531 [2024-11-18 08:10:02.483814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.531 [2024-11-18 08:10:02.483828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.531 [2024-11-18 08:10:02.483840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90
00:36:09.531 [2024-11-18 08:10:02.483869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.531 qpair failed and we were unable to recover it.
00:36:09.531 [2024-11-18 08:10:02.493846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.531 [2024-11-18 08:10:02.493937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.531 [2024-11-18 08:10:02.493963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.531 [2024-11-18 08:10:02.493976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.531 [2024-11-18 08:10:02.493988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.531 [2024-11-18 08:10:02.494023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.531 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.503905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.504032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.504058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.504072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.504084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.504128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.513905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.513995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.514022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.514036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.514047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.514090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.523914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.523999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.524025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.524040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.524051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.524081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.533866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.533993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.534020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.534034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.534045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.534075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.543939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.544048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.544074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.544087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.544099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.544129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.553951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.554036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.554062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.554076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.554087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.554116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.564000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.564102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.564128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.564142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.564154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.564197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.574075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.574168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.574194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.574208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.574220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.574250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.584015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.584143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.584175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.584190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.584202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.584232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.594023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.594136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.594166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.594181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.594193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.594225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.604057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.604140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.604167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.604182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.604193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.604224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.532 [2024-11-18 08:10:02.614222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.532 [2024-11-18 08:10:02.614349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.532 [2024-11-18 08:10:02.614377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.532 [2024-11-18 08:10:02.614391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.532 [2024-11-18 08:10:02.614403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.532 [2024-11-18 08:10:02.614434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.532 qpair failed and we were unable to recover it. 
00:36:09.792 [2024-11-18 08:10:02.624219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.792 [2024-11-18 08:10:02.624310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.792 [2024-11-18 08:10:02.624338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.792 [2024-11-18 08:10:02.624367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.792 [2024-11-18 08:10:02.624392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.792 [2024-11-18 08:10:02.624425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.792 qpair failed and we were unable to recover it. 
00:36:09.792 [2024-11-18 08:10:02.634144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.792 [2024-11-18 08:10:02.634249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.792 [2024-11-18 08:10:02.634276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.792 [2024-11-18 08:10:02.634290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.792 [2024-11-18 08:10:02.634302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.792 [2024-11-18 08:10:02.634332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.792 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.644205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.644283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.644310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.644324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.644337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.644380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.654208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.654335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.654362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.654376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.654388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.654417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.664231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.664318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.664344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.664358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.664370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.664400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.674263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.674349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.674375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.674389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.674401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.674431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.684283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.684377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.684407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.684423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.684435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.684465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.694369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.694462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.694488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.694516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.694528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.694560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.704370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.704455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.704482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.704505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.704518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.704548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.714366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.714449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.714484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.714509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.714521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.714551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.724411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.724497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.724525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.724540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.724552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.724581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.734420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.734518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.734544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.734558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.734570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.734600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.793 qpair failed and we were unable to recover it. 
00:36:09.793 [2024-11-18 08:10:02.744477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.793 [2024-11-18 08:10:02.744576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.793 [2024-11-18 08:10:02.744601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.793 [2024-11-18 08:10:02.744615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.793 [2024-11-18 08:10:02.744626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.793 [2024-11-18 08:10:02.744657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.754473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.754613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.754639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.754653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.754672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.754703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.764518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.764617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.764645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.764660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.764672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.764702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.774579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.774677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.774703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.774716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.774729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.774758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.784583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.784709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.784734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.784748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.784760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.784789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.794616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.794742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.794768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.794782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.794794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.794823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.804652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.804741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.804767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.804782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.804793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.804823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.814707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.814802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.814827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.814841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.814852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.814882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.824801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.824931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.824956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.824970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.824982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.825012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.834724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.834821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.834847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.834861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.834872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.834902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.844745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.844834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.844877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.844894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.844906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.794 [2024-11-18 08:10:02.844937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.794 qpair failed and we were unable to recover it. 
00:36:09.794 [2024-11-18 08:10:02.854865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.794 [2024-11-18 08:10:02.855002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.794 [2024-11-18 08:10:02.855029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.794 [2024-11-18 08:10:02.855043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.794 [2024-11-18 08:10:02.855054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.795 [2024-11-18 08:10:02.855098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.795 qpair failed and we were unable to recover it. 
00:36:09.795 [2024-11-18 08:10:02.864848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.795 [2024-11-18 08:10:02.864930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.795 [2024-11-18 08:10:02.864956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.795 [2024-11-18 08:10:02.864970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.795 [2024-11-18 08:10:02.864981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.795 [2024-11-18 08:10:02.865012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.795 qpair failed and we were unable to recover it. 
00:36:09.795 [2024-11-18 08:10:02.874836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.795 [2024-11-18 08:10:02.874953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.795 [2024-11-18 08:10:02.874979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.795 [2024-11-18 08:10:02.874993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.795 [2024-11-18 08:10:02.875005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:09.795 [2024-11-18 08:10:02.875035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.795 qpair failed and we were unable to recover it. 
00:36:10.055 [2024-11-18 08:10:02.884946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.055 [2024-11-18 08:10:02.885075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.055 [2024-11-18 08:10:02.885112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.055 [2024-11-18 08:10:02.885136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.055 [2024-11-18 08:10:02.885149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.055 [2024-11-18 08:10:02.885181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.055 qpair failed and we were unable to recover it. 
00:36:10.055 [2024-11-18 08:10:02.894922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.055 [2024-11-18 08:10:02.895013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.055 [2024-11-18 08:10:02.895041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.055 [2024-11-18 08:10:02.895055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.055 [2024-11-18 08:10:02.895067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.055 [2024-11-18 08:10:02.895097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.055 qpair failed and we were unable to recover it. 
00:36:10.055 [2024-11-18 08:10:02.905000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.055 [2024-11-18 08:10:02.905089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.055 [2024-11-18 08:10:02.905116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.055 [2024-11-18 08:10:02.905130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.055 [2024-11-18 08:10:02.905142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.055 [2024-11-18 08:10:02.905172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.055 qpair failed and we were unable to recover it. 
00:36:10.055 [2024-11-18 08:10:02.915007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.055 [2024-11-18 08:10:02.915116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.055 [2024-11-18 08:10:02.915141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.055 [2024-11-18 08:10:02.915155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.055 [2024-11-18 08:10:02.915167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.055 [2024-11-18 08:10:02.915196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.055 qpair failed and we were unable to recover it. 
00:36:10.055 [2024-11-18 08:10:02.925003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.055 [2024-11-18 08:10:02.925133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.055 [2024-11-18 08:10:02.925159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.055 [2024-11-18 08:10:02.925173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.055 [2024-11-18 08:10:02.925184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.055 [2024-11-18 08:10:02.925220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.055 qpair failed and we were unable to recover it. 
00:36:10.055 [2024-11-18 08:10:02.935072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.055 [2024-11-18 08:10:02.935166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.055 [2024-11-18 08:10:02.935192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.055 [2024-11-18 08:10:02.935206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:02.935217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:02.935247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:02.945119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:02.945206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:02.945232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:02.945246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:02.945258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:02.945301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:02.955075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:02.955168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:02.955195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:02.955209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:02.955221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:02.955250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:02.965083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:02.965169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:02.965195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:02.965209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:02.965221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:02.965251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:02.975226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:02.975331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:02.975361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:02.975378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:02.975390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:02.975421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:02.985172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:02.985290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:02.985317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:02.985331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:02.985343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:02.985373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:02.995171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:02.995261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:02.995286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:02.995300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:02.995312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:02.995342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:03.005321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:03.005404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:03.005430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:03.005444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:03.005456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:03.005486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:03.015235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:03.015323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:03.015349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:03.015369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:03.015381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:03.015412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:03.025267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:03.025361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:03.025387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:03.025402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:03.025415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:03.025446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:03.035293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.056 [2024-11-18 08:10:03.035382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.056 [2024-11-18 08:10:03.035408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.056 [2024-11-18 08:10:03.035421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.056 [2024-11-18 08:10:03.035433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.056 [2024-11-18 08:10:03.035464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.056 qpair failed and we were unable to recover it. 
00:36:10.056 [2024-11-18 08:10:03.045345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.045458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.045487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.045514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.045528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.045563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.055396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.055482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.055517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.055532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.055544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.055593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.065368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.065480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.065513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.065528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.065540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.065569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.075441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.075537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.075564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.075578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.075589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.075619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.085519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.085605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.085631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.085645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.085657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.085687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.095456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.095559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.095591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.095607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.095619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.095650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.105529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.105622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.105649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.105664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.105675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.105706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.115551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.115662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.115689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.115703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.115715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.115745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.125620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.125703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.125729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.125743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.125755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.125799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.057 [2024-11-18 08:10:03.135665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.057 [2024-11-18 08:10:03.135811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.057 [2024-11-18 08:10:03.135837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.057 [2024-11-18 08:10:03.135851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.057 [2024-11-18 08:10:03.135862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.057 [2024-11-18 08:10:03.135908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.057 qpair failed and we were unable to recover it. 
00:36:10.317 [2024-11-18 08:10:03.145649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.317 [2024-11-18 08:10:03.145761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.317 [2024-11-18 08:10:03.145794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.317 [2024-11-18 08:10:03.145809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.317 [2024-11-18 08:10:03.145821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.317 [2024-11-18 08:10:03.145852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.317 qpair failed and we were unable to recover it. 
00:36:10.317 [2024-11-18 08:10:03.155706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.317 [2024-11-18 08:10:03.155842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.317 [2024-11-18 08:10:03.155869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.317 [2024-11-18 08:10:03.155883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.317 [2024-11-18 08:10:03.155896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.317 [2024-11-18 08:10:03.155940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.317 qpair failed and we were unable to recover it. 
00:36:10.317 [2024-11-18 08:10:03.165672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.317 [2024-11-18 08:10:03.165760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.317 [2024-11-18 08:10:03.165786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.317 [2024-11-18 08:10:03.165800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.317 [2024-11-18 08:10:03.165812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.317 [2024-11-18 08:10:03.165842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.317 qpair failed and we were unable to recover it. 
00:36:10.317 [2024-11-18 08:10:03.175807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.317 [2024-11-18 08:10:03.175945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.317 [2024-11-18 08:10:03.175971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.317 [2024-11-18 08:10:03.175985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.317 [2024-11-18 08:10:03.175997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.317 [2024-11-18 08:10:03.176027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.317 qpair failed and we were unable to recover it. 
00:36:10.317 [2024-11-18 08:10:03.185732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.317 [2024-11-18 08:10:03.185821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.317 [2024-11-18 08:10:03.185848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.185862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.185879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.185909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.195781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.195911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.195938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.195952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.195963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.195993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.205798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.205897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.205923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.205937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.205949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.205979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.215905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.216008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.216034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.216048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.216059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.216089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.225868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.225953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.225979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.225993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.226005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.226034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.235872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.235966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.235992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.236006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.236017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.236047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.245905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.245984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.246011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.246025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.246037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.246066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.255984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.256083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.256109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.256123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.256135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.256165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.265952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.266044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.266069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.266083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.266094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.266124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.276018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.276114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.276144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.276160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.276172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.276201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.286013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.286095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.286122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.286136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.286148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.286178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.318 qpair failed and we were unable to recover it. 
00:36:10.318 [2024-11-18 08:10:03.296081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.318 [2024-11-18 08:10:03.296173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.318 [2024-11-18 08:10:03.296199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.318 [2024-11-18 08:10:03.296213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.318 [2024-11-18 08:10:03.296225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.318 [2024-11-18 08:10:03.296254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.306070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.306151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.306176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.306190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.306201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.306231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.316127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.316219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.316245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.316259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.316276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.316306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.326122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.326213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.326239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.326253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.326265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.326295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.336192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.336278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.336303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.336317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.336329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.336372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.346241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.346341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.346371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.346387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.346399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.346444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.356244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.356344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.356371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.356385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.356397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.356427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.366285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.366372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.366398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.366412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.366424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.366454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.376293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.376383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.376409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.376423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.376435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.376465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.319 [2024-11-18 08:10:03.386317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.319 [2024-11-18 08:10:03.386442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.319 [2024-11-18 08:10:03.386469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.319 [2024-11-18 08:10:03.386482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.319 [2024-11-18 08:10:03.386505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af8000b90 00:36:10.319 [2024-11-18 08:10:03.386536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.319 qpair failed and we were unable to recover it. 
00:36:10.578 [2024-11-18 08:10:03.536817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.578 [2024-11-18 08:10:03.536932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.578 [2024-11-18 08:10:03.536966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.578 [2024-11-18 08:10:03.536984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.578 [2024-11-18 08:10:03.536996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7b00000b90 00:36:10.578 [2024-11-18 08:10:03.537027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.578 qpair failed and we were unable to recover it. 
00:36:10.578 [2024-11-18 08:10:03.546801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.578 [2024-11-18 08:10:03.546891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.578 [2024-11-18 08:10:03.546918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.578 [2024-11-18 08:10:03.546933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.578 [2024-11-18 08:10:03.546945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7b00000b90 00:36:10.578 [2024-11-18 08:10:03.546976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.578 qpair failed and we were unable to recover it. 
00:36:10.578 [2024-11-18 08:10:03.556847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.578 [2024-11-18 08:10:03.556945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.578 [2024-11-18 08:10:03.556977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.578 [2024-11-18 08:10:03.556994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.578 [2024-11-18 08:10:03.557006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x160f690 00:36:10.578 [2024-11-18 08:10:03.557037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:10.578 qpair failed and we were unable to recover it. 
00:36:10.578 [2024-11-18 08:10:03.566866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.578 [2024-11-18 08:10:03.567001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.578 [2024-11-18 08:10:03.567029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.578 [2024-11-18 08:10:03.567043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.578 [2024-11-18 08:10:03.567055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x160f690 00:36:10.578 [2024-11-18 08:10:03.567084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:10.578 qpair failed and we were unable to recover it. 
00:36:10.578 [2024-11-18 08:10:03.576886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.578 [2024-11-18 08:10:03.576977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.578 [2024-11-18 08:10:03.577011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.578 [2024-11-18 08:10:03.577033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.578 [2024-11-18 08:10:03.577047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af4000b90 00:36:10.578 [2024-11-18 08:10:03.577078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.578 qpair failed and we were unable to recover it. 
00:36:10.578 [2024-11-18 08:10:03.586918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.578 [2024-11-18 08:10:03.587002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.578 [2024-11-18 08:10:03.587030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.579 [2024-11-18 08:10:03.587044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.579 [2024-11-18 08:10:03.587056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7af4000b90 00:36:10.579 [2024-11-18 08:10:03.587100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.579 qpair failed and we were unable to recover it. 00:36:10.579 [2024-11-18 08:10:03.587211] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:10.579 A controller has encountered a failure and is being reset. 00:36:10.579 Controller properly reset. 00:36:10.579 Initializing NVMe Controllers 00:36:10.579 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:10.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:10.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:10.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:10.579 Initialization complete. Launching workers. 
00:36:10.579 Starting thread on core 1 00:36:10.579 Starting thread on core 2 00:36:10.579 Starting thread on core 3 00:36:10.579 Starting thread on core 0 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:10.579 00:36:10.579 real 0m10.715s 00:36:10.579 user 0m19.118s 00:36:10.579 sys 0m5.222s 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:10.579 ************************************ 00:36:10.579 END TEST nvmf_target_disconnect_tc2 00:36:10.579 ************************************ 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.579 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.579 rmmod nvme_tcp 00:36:10.579 rmmod nvme_fabrics 00:36:10.836 rmmod nvme_keyring 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 896763 ']' 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 896763 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 896763 ']' 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 896763 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 896763 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 896763' 00:36:10.836 killing process with pid 896763 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 896763 00:36:10.836 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 896763 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.095 08:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.001 08:10:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:13.001 00:36:13.001 real 0m15.693s 00:36:13.001 user 0m45.303s 00:36:13.001 sys 0m7.338s 00:36:13.001 08:10:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.001 08:10:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:13.001 ************************************ 00:36:13.001 END TEST nvmf_target_disconnect 00:36:13.001 ************************************ 00:36:13.001 08:10:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:13.001 00:36:13.001 real 6m41.848s 00:36:13.001 user 17m14.726s 00:36:13.001 sys 1m26.653s 00:36:13.001 08:10:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.001 08:10:06 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.001 ************************************ 00:36:13.001 END TEST nvmf_host 00:36:13.001 ************************************ 00:36:13.001 08:10:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:13.001 08:10:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:13.001 08:10:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:13.001 08:10:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:13.001 08:10:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.001 08:10:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:13.001 ************************************ 00:36:13.001 START TEST nvmf_target_core_interrupt_mode 00:36:13.001 ************************************ 00:36:13.001 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:13.260 * Looking for test storage... 
00:36:13.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:13.260 08:10:06 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:13.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.260 --rc 
genhtml_branch_coverage=1 00:36:13.260 --rc genhtml_function_coverage=1 00:36:13.260 --rc genhtml_legend=1 00:36:13.260 --rc geninfo_all_blocks=1 00:36:13.260 --rc geninfo_unexecuted_blocks=1 00:36:13.260 00:36:13.260 ' 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:13.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.260 --rc genhtml_branch_coverage=1 00:36:13.260 --rc genhtml_function_coverage=1 00:36:13.260 --rc genhtml_legend=1 00:36:13.260 --rc geninfo_all_blocks=1 00:36:13.260 --rc geninfo_unexecuted_blocks=1 00:36:13.260 00:36:13.260 ' 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:13.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.260 --rc genhtml_branch_coverage=1 00:36:13.260 --rc genhtml_function_coverage=1 00:36:13.260 --rc genhtml_legend=1 00:36:13.260 --rc geninfo_all_blocks=1 00:36:13.260 --rc geninfo_unexecuted_blocks=1 00:36:13.260 00:36:13.260 ' 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:13.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.260 --rc genhtml_branch_coverage=1 00:36:13.260 --rc genhtml_function_coverage=1 00:36:13.260 --rc genhtml_legend=1 00:36:13.260 --rc geninfo_all_blocks=1 00:36:13.260 --rc geninfo_unexecuted_blocks=1 00:36:13.260 00:36:13.260 ' 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:13.260 
08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.260 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.261 08:10:06 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:13.261 
08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:13.261 ************************************ 00:36:13.261 START TEST nvmf_abort 00:36:13.261 ************************************ 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:13.261 * Looking for test storage... 
00:36:13.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:36:13.261 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:13.520 08:10:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:13.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.520 --rc genhtml_branch_coverage=1 00:36:13.520 --rc genhtml_function_coverage=1 00:36:13.520 --rc genhtml_legend=1 00:36:13.520 --rc geninfo_all_blocks=1 00:36:13.520 --rc geninfo_unexecuted_blocks=1 00:36:13.520 00:36:13.520 ' 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:13.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.520 --rc genhtml_branch_coverage=1 00:36:13.520 --rc genhtml_function_coverage=1 00:36:13.520 --rc genhtml_legend=1 00:36:13.520 --rc geninfo_all_blocks=1 00:36:13.520 --rc geninfo_unexecuted_blocks=1 00:36:13.520 00:36:13.520 ' 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:13.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.520 --rc genhtml_branch_coverage=1 00:36:13.520 --rc genhtml_function_coverage=1 00:36:13.520 --rc genhtml_legend=1 00:36:13.520 --rc geninfo_all_blocks=1 00:36:13.520 --rc geninfo_unexecuted_blocks=1 00:36:13.520 00:36:13.520 ' 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:13.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.520 --rc genhtml_branch_coverage=1 00:36:13.520 --rc genhtml_function_coverage=1 00:36:13.520 --rc genhtml_legend=1 00:36:13.520 --rc geninfo_all_blocks=1 00:36:13.520 --rc geninfo_unexecuted_blocks=1 00:36:13.520 00:36:13.520 ' 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:13.520 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:13.521 08:10:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:13.521 08:10:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:13.521 08:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:15.427 08:10:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:15.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:15.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.427 
08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:15.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.427 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:15.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:15.428 08:10:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:15.428 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.687 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.687 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.687 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:15.687 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:15.687 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.687 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.687 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:15.687 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:15.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:36:15.688 00:36:15.688 --- 10.0.0.2 ping statistics --- 00:36:15.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.688 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:15.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:36:15.688 00:36:15.688 --- 10.0.0.1 ping statistics --- 00:36:15.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.688 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=899568 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 899568 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 899568 ']' 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.688 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.688 [2024-11-18 08:10:08.698234] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:15.688 [2024-11-18 08:10:08.699328] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:36:15.688 [2024-11-18 08:10:08.699400] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.688 [2024-11-18 08:10:08.773009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:15.947 [2024-11-18 08:10:08.820135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.947 [2024-11-18 08:10:08.820190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.947 [2024-11-18 08:10:08.820211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.947 [2024-11-18 08:10:08.820230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.947 [2024-11-18 08:10:08.820244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:15.947 [2024-11-18 08:10:08.821752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:15.947 [2024-11-18 08:10:08.821821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:15.947 [2024-11-18 08:10:08.821824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.947 [2024-11-18 08:10:08.903943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:15.947 [2024-11-18 08:10:08.904162] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:15.947 [2024-11-18 08:10:08.904170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:15.947 [2024-11-18 08:10:08.904455] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.947 [2024-11-18 08:10:08.962562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.947 08:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:15.947 Malloc0 00:36:15.947 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.947 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:15.947 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.948 Delay0 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.948 [2024-11-18 08:10:09.030732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.948 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.206 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.206 08:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:16.206 [2024-11-18 08:10:09.132434] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:18.105 Initializing NVMe Controllers 00:36:18.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:18.105 controller IO queue size 128 less than required 00:36:18.105 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:18.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:18.105 Initialization complete. Launching workers. 
00:36:18.105 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28937 00:36:18.105 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28998, failed to submit 66 00:36:18.105 success 28937, unsuccessful 61, failed 0 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:18.105 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:18.105 rmmod nvme_tcp 00:36:18.105 rmmod nvme_fabrics 00:36:18.363 rmmod nvme_keyring 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:18.363 08:10:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 899568 ']' 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 899568 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 899568 ']' 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 899568 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 899568 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 899568' 00:36:18.363 killing process with pid 899568 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 899568 00:36:18.363 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 899568 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.623 08:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.528 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:20.528 00:36:20.528 real 0m7.275s 00:36:20.528 user 0m9.209s 00:36:20.528 sys 0m2.820s 00:36:20.528 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.528 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.528 ************************************ 00:36:20.528 END TEST nvmf_abort 00:36:20.528 ************************************ 00:36:20.528 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:20.528 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:20.528 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:20.528 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:20.528 ************************************ 00:36:20.528 START TEST nvmf_ns_hotplug_stress 00:36:20.528 ************************************ 00:36:20.528 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:20.788 * Looking for test storage... 00:36:20.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:20.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.788 --rc genhtml_branch_coverage=1 00:36:20.788 --rc genhtml_function_coverage=1 00:36:20.788 --rc genhtml_legend=1 00:36:20.788 --rc geninfo_all_blocks=1 00:36:20.788 --rc geninfo_unexecuted_blocks=1 00:36:20.788 00:36:20.788 ' 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:20.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.788 --rc genhtml_branch_coverage=1 00:36:20.788 --rc genhtml_function_coverage=1 00:36:20.788 --rc genhtml_legend=1 00:36:20.788 --rc geninfo_all_blocks=1 00:36:20.788 --rc geninfo_unexecuted_blocks=1 00:36:20.788 00:36:20.788 ' 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:20.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.788 --rc genhtml_branch_coverage=1 00:36:20.788 --rc genhtml_function_coverage=1 00:36:20.788 --rc genhtml_legend=1 00:36:20.788 --rc geninfo_all_blocks=1 00:36:20.788 --rc geninfo_unexecuted_blocks=1 00:36:20.788 00:36:20.788 ' 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:20.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.788 --rc genhtml_branch_coverage=1 00:36:20.788 --rc genhtml_function_coverage=1 00:36:20.788 --rc genhtml_legend=1 00:36:20.788 --rc geninfo_all_blocks=1 00:36:20.788 --rc geninfo_unexecuted_blocks=1 00:36:20.788 00:36:20.788 ' 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.788 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:20.789 08:10:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:20.789 08:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:20.789 08:10:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:23.321 08:10:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:23.321 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:23.322 
08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:23.322 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.322 08:10:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:23.322 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.322 08:10:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:23.322 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:23.322 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:23.322 08:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:23.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:23.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:36:23.322 00:36:23.322 --- 10.0.0.2 ping statistics --- 00:36:23.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.322 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:23.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:23.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:36:23.322 00:36:23.322 --- 10.0.0.1 ping statistics --- 00:36:23.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.322 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.322 08:10:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:23.322 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=901787 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 901787 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 901787 ']' 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:23.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.323 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:23.323 [2024-11-18 08:10:16.240864] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:23.323 [2024-11-18 08:10:16.241967] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:23.323 [2024-11-18 08:10:16.242028] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.323 [2024-11-18 08:10:16.316097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:23.323 [2024-11-18 08:10:16.361872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:23.323 [2024-11-18 08:10:16.361923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:23.323 [2024-11-18 08:10:16.361943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:23.323 [2024-11-18 08:10:16.361960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:23.323 [2024-11-18 08:10:16.361975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:23.323 [2024-11-18 08:10:16.363341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:23.323 [2024-11-18 08:10:16.363457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.323 [2024-11-18 08:10:16.363448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:23.581 [2024-11-18 08:10:16.448409] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:23.581 [2024-11-18 08:10:16.448643] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:23.581 [2024-11-18 08:10:16.448653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:23.581 [2024-11-18 08:10:16.448980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:23.581 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.581 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:23.581 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:23.581 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:23.581 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:23.581 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.581 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:36:23.581 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:23.839 [2024-11-18 08:10:16.768267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.839 08:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:24.096 08:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:24.353 [2024-11-18 08:10:17.312519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.353 08:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:24.611 08:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:24.870 Malloc0 00:36:24.870 08:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:25.128 Delay0 00:36:25.128 08:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.385 08:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:25.951 NULL1 00:36:25.951 08:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:25.951 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=902203 00:36:25.951 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:25.951 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:25.951 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.210 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.775 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:26.775 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:26.775 true 00:36:26.775 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:26.775 08:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.341 08:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.341 08:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:27.341 08:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:27.599 true 00:36:27.857 08:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:27.857 08:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.115 08:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.383 08:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:28.383 08:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:28.731 true 00:36:28.731 08:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:28.731 08:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.665 Read completed with error (sct=0, sc=11) 00:36:29.665 08:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.665 08:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:29.665 08:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:29.923 true 00:36:30.181 08:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:30.181 08:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.438 08:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:36:30.696 08:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:30.696 08:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:30.954 true 00:36:30.954 08:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:30.954 08:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.212 08:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.471 08:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:31.471 08:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:31.728 true 00:36:31.728 08:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:31.728 08:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.659 08:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.916 08:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:32.916 08:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:33.174 true 00:36:33.174 08:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:33.174 08:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.432 08:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.689 08:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:33.689 08:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:33.947 true 00:36:33.947 08:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:33.947 08:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.205 08:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.464 08:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:34.464 08:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:34.722 true 00:36:34.722 08:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:34.722 08:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.655 08:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.914 08:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:35.914 08:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:36.172 true 00:36:36.172 08:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:36.172 08:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.429 08:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.687 08:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:36.687 08:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:36.945 true 00:36:37.203 08:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:37.203 08:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.461 08:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.719 08:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:37.719 08:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:37.976 true 00:36:37.976 08:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:37.976 08:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.909 08:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.167 08:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:39.167 08:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:39.425 true 00:36:39.425 08:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:39.425 08:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.683 08:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.940 08:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:39.940 08:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:40.198 true 00:36:40.198 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:40.198 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.456 08:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.714 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:40.714 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:40.971 true 00:36:40.971 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:40.971 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.904 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.162 08:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:42.162 08:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:42.419 true 00:36:42.419 08:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:42.419 08:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.677 08:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.935 08:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:42.935 08:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:43.192 true 00:36:43.450 08:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:43.450 08:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.707 08:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.965 08:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:43.965 08:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:44.222 true 00:36:44.223 08:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:44.223 08:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.155 08:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.412 08:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:45.412 08:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:45.670 true 00:36:45.670 08:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:45.670 08:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.928 08:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.186 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:46.186 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:46.443 true 00:36:46.443 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
902203 00:36:46.443 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.703 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.269 08:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:47.269 08:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:47.269 true 00:36:47.527 08:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:47.527 08:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.461 08:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.461 08:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:48.461 08:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:48.719 true 00:36:48.719 08:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 902203 00:36:48.719 08:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.976 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.233 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:49.233 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:49.491 true 00:36:49.491 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:49.748 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.006 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.263 08:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:50.263 08:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:50.521 true 00:36:50.521 08:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:50.521 08:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.453 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.711 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:51.711 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:51.969 true 00:36:51.969 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:51.969 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.227 08:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.484 08:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:52.484 08:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:52.741 true 00:36:52.741 08:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:52.741 08:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.674 08:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.955 08:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:53.955 08:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:54.214 true 00:36:54.214 08:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:54.214 08:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.492 08:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:36:54.750 08:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:54.750 08:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:55.007 true 00:36:55.007 08:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:55.007 08:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.940 08:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.941 08:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:55.941 08:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:56.204 true 00:36:56.204 08:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:56.204 08:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.468 Initializing NVMe Controllers 00:36:56.468 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:56.468 Controller IO queue size 128, less than required. 00:36:56.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:56.468 Controller IO queue size 128, less than required. 00:36:56.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:56.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:56.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:56.468 Initialization complete. Launching workers. 00:36:56.468 ======================================================== 00:36:56.468 Latency(us) 00:36:56.468 Device Information : IOPS MiB/s Average min max 00:36:56.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 435.93 0.21 108752.97 3381.68 1013613.89 00:36:56.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7800.85 3.81 16359.99 3257.18 542761.31 00:36:56.468 ======================================================== 00:36:56.468 Total : 8236.78 4.02 21249.89 3257.18 1013613.89 00:36:56.468 00:36:56.468 08:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.725 08:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:36:56.725 08:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:36:56.982 true 00:36:57.240 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 902203 00:36:57.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (902203) - No such process 00:36:57.240 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 902203 00:36:57.240 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.497 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:57.755 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:57.755 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:57.755 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:57.755 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.755 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:58.014 null0 00:36:58.014 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:58.014 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:58.014 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:58.272 null1 00:36:58.272 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:58.272 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:58.272 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:58.530 null2 00:36:58.530 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:58.530 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:58.530 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:58.789 null3 00:36:58.789 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:58.789 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:58.789 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:59.047 null4 00:36:59.047 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:59.047 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:36:59.047 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:36:59.305 null5
00:36:59.305 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:36:59.305 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:59.305 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:36:59.564 null6
00:36:59.564 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:36:59.564 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:59.564 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:36:59.822 null7
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
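The trace above shows ns_hotplug_stress.sh (around lines 58-60) creating one null bdev per worker. A minimal sketch of that loop, with a hypothetical `rpc` stub standing in for `scripts/rpc.py` (which needs a live SPDK target):

```shell
#!/usr/bin/env bash
# Sketch reconstructed from the trace; not the verbatim script.
# "rpc" is a stand-in stub: real runs invoke spdk/scripts/rpc.py
# against a running nvmf target.
rpc() { echo "rpc.py $*"; }

nthreads=8
pids=()

# One null bdev per worker thread: 100 MiB, 4096-byte blocks,
# matching "bdev_null_create nullN 100 4096" in the log.
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done
```

Null bdevs discard writes and return zeroes on reads, so they give the namespace add/remove path a backing device with no I/O cost.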
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:59.822 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
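The interleaving above (repeating `add_remove N nullM` at line 63 followed by `pids+=($!)` at line 64, and later `wait 906221 906222 ...` at line 66) is the fan-out of eight background workers. A minimal sketch of that pattern, inferred from the trace rather than copied from the script, with `add_remove` replaced by a trivial stand-in:

```shell
#!/usr/bin/env bash
# Sketch of the worker fan-out inferred from the trace (lines 62-66
# of ns_hotplug_stress.sh). "add_remove" is a stand-in; the real
# function hammers namespace add/remove via rpc.py.
add_remove() { echo "worker nsid=$1 bdev=$2"; }

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &   # e.g. add_remove 1 null0 ... add_remove 8 null7
    pids+=($!)                         # remember each worker's PID
done
wait "${pids[@]}"                      # corresponds to "wait 906221 906222 ..." in the log
```

`$!` is the PID of the most recent background job, so `pids` ends up holding all eight workers, and the single `wait` blocks until every add/remove loop has finished.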
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 906221 906222 906223 906225 906228 906230 906232 906234
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:59.823 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:00.081 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:00.081 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:00.340 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:00.340 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:00.340 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:00.340 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:00.340 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:00.340 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:00.598 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:00.857 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:00.857 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:00.857 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:00.857 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:00.857 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:00.857 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:00.857 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:00.857 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.116 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:01.375 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:01.375 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:01.375 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:01.375 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:01.375 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:01.375 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:01.375 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:01.375 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:01.634 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:01.893 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:01.893 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:01.893 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:01.893 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:01.893 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:02.151 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:02.151 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:02.151 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:02.409 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:02.410 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:02.410 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:02.410 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:02.410 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:02.410 08:10:55
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:02.668 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.668 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:02.668 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:02.668 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:02.668 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:02.668 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:02.668 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:02.668 08:10:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.926 08:10:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:02.926 08:10:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.926 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:03.183 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.183 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:03.184 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.184 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:03.184 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.184 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:03.184 08:10:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:03.184 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.442 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.702 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.702 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.702 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:03.960 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:03.960 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.960 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:03.960 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:03.960 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.218 08:10:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:04.218 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.218 08:10:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.219 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:04.219 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.219 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.219 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:04.484 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.484 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:04.484 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.484 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:04.484 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:04.484 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:04.484 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:04.484 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:04.748 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.749 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:37:04.749 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.749 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.749 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.749 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.749 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.749 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:05.007 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:05.007 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.007 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:05.007 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:37:05.007 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:05.007 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:05.007 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:05.007 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:05.265 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.266 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:05.524 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.524 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:05.524 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:05.524 08:10:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:05.524 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:05.524 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:05.524 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:05.524 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:06.091 08:10:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:06.091 rmmod nvme_tcp 00:37:06.091 rmmod nvme_fabrics 00:37:06.091 rmmod nvme_keyring 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 901787 ']' 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 901787 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 901787 ']' 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 901787 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:06.091 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901787 00:37:06.091 08:10:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:06.091 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:06.091 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901787' 00:37:06.091 killing process with pid 901787 00:37:06.091 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 901787 00:37:06.091 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 901787 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:06.359 08:10:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:06.359 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.273 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:08.273 00:37:08.273 real 0m47.699s 00:37:08.273 user 3m19.481s 00:37:08.273 sys 0m22.199s 00:37:08.273 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:08.273 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:08.273 ************************************ 00:37:08.273 END TEST nvmf_ns_hotplug_stress 00:37:08.273 ************************************ 00:37:08.273 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:08.273 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:08.273 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:08.273 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:08.273 ************************************ 00:37:08.273 START TEST nvmf_delete_subsystem 00:37:08.273 ************************************ 00:37:08.273 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:08.534 * Looking for test storage... 00:37:08.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:08.534 
08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:08.534 08:11:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:08.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.534 --rc genhtml_branch_coverage=1 00:37:08.534 --rc genhtml_function_coverage=1 00:37:08.534 --rc genhtml_legend=1 00:37:08.534 --rc geninfo_all_blocks=1 00:37:08.534 --rc geninfo_unexecuted_blocks=1 00:37:08.534 00:37:08.534 ' 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:08.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.534 --rc genhtml_branch_coverage=1 00:37:08.534 --rc genhtml_function_coverage=1 00:37:08.534 --rc genhtml_legend=1 00:37:08.534 --rc geninfo_all_blocks=1 00:37:08.534 --rc geninfo_unexecuted_blocks=1 00:37:08.534 00:37:08.534 ' 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:08.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.534 --rc genhtml_branch_coverage=1 00:37:08.534 --rc genhtml_function_coverage=1 00:37:08.534 --rc genhtml_legend=1 00:37:08.534 --rc geninfo_all_blocks=1 00:37:08.534 --rc 
geninfo_unexecuted_blocks=1 00:37:08.534 00:37:08.534 ' 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:08.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.534 --rc genhtml_branch_coverage=1 00:37:08.534 --rc genhtml_function_coverage=1 00:37:08.534 --rc genhtml_legend=1 00:37:08.534 --rc geninfo_all_blocks=1 00:37:08.534 --rc geninfo_unexecuted_blocks=1 00:37:08.534 00:37:08.534 ' 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.534 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.534 
08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:08.535 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:08.535 08:11:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:11.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:37:11.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:11.074 08:11:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:11.074 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:11.074 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:11.074 08:11:03 
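For each matched PCI function the harness globs the device's `net/` directory to find the bound kernel interface, then strips the path prefix with `${pci_net_devs[@]##*/}` — exactly the two steps at @411 and @427 above. The same mechanic recreated against a scratch tree so it runs without `/sys` (the `cvl_0_0` name is taken from this log):

```shell
# Mimic the sysfs layout under a temp dir instead of /sys/bus/pci/devices.
scratch="$(mktemp -d)"
pci="0000:0a:00.0"
mkdir -p "$scratch/$pci/net/cvl_0_0"

pci_net_devs=("$scratch/$pci/net/"*)     # one glob hit per bound netdev
pci_net_devs=("${pci_net_devs[@]##*/}")  # drop everything up to the last '/'
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$scratch"
```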
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:11.074 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
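The `(( 2 > 1 ))` branch above assigns the discovered interface pair: the first netdev becomes the target side (soon moved into its own namespace) and the second the initiator side. A sketch of just that selection, using the names this run found (the single-interface fallback path of common.sh is not shown in this excerpt):

```shell
net_devs=(cvl_0_0 cvl_0_1)               # as discovered in the log above
TCP_INTERFACE_LIST=("${net_devs[@]}")

if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
  NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}     # will live in the netns
  NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}  # stays in the root netns
fi
NVMF_TARGET_NAMESPACE="${NVMF_TARGET_INTERFACE}_ns_spdk"
echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE ns=$NVMF_TARGET_NAMESPACE"
```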
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:37:11.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:11.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:37:11.075 00:37:11.075 --- 10.0.0.2 ping statistics --- 00:37:11.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.075 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:11.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:11.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:37:11.075 00:37:11.075 --- 10.0.0.1 ping statistics --- 00:37:11.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.075 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
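The netns/ping sequence above builds a two-sided loopback-free topology on one host: the target NIC is isolated in its own network namespace with 10.0.0.2, the initiator NIC keeps 10.0.0.1, port 4420 is opened in iptables, and a cross-ping in each direction proves the path. A dry-run sketch of those commands (`run` only echoes, so this is safe without root; swap it for direct execution as root to apply for real):

```shell
# Dry-run of the topology nvmf_tcp_init builds in the log above.
run() { echo "+ $*"; }

tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk
run ip netns add "$ns"
run ip link set "$tgt" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$ini"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
run ip link set "$ini" up
run ip netns exec "$ns" ip link set "$tgt" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator
```

Because the target later runs under `ip netns exec cvl_0_0_ns_spdk`, its listener on 10.0.0.2:4420 is only reachable through the real NIC pair, which is what makes this a genuine wire-level TCP test on a single machine.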
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=909095 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 909095 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 909095 ']' 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
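`waitforlisten 909095` above blocks until the freshly launched `nvmf_tgt` answers on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A minimal sketch of that polling idea, against a scratch path rather than the real RPC socket (the real helper additionally checks that the pid is still alive and that the socket actually answers RPCs, which this sketch omits):

```shell
# Poll until a path appears, up to max_retries attempts; returns non-zero on
# timeout. Stands in for waitforlisten's "socket exists yet?" loop.
wait_for_path() {
  local path=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}

tmp="$(mktemp -d)"
( sleep 0.3; touch "$tmp/spdk.sock" ) &   # stand-in for the target creating its socket
wait_for_path "$tmp/spdk.sock" && echo "listening on $tmp/spdk.sock"
wait
```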
00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.075 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.075 [2024-11-18 08:11:03.922607] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:11.075 [2024-11-18 08:11:03.923636] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:11.075 [2024-11-18 08:11:03.923693] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.075 [2024-11-18 08:11:03.995764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:11.075 [2024-11-18 08:11:04.037834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:11.075 [2024-11-18 08:11:04.037893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:11.075 [2024-11-18 08:11:04.037928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:11.075 [2024-11-18 08:11:04.037940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:11.075 [2024-11-18 08:11:04.037950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:11.075 [2024-11-18 08:11:04.039306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.075 [2024-11-18 08:11:04.039311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.075 [2024-11-18 08:11:04.118617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:11.075 [2024-11-18 08:11:04.118667] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:11.075 [2024-11-18 08:11:04.118905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:11.075 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:11.075 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:11.075 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:11.075 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:11.075 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.334 [2024-11-18 08:11:04.172076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.334 [2024-11-18 08:11:04.192309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.334 NULL1 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.334 Delay0 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=909121 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:11.334 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:11.334 [2024-11-18 08:11:04.273948] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
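The setup above is deliberately slow: a null bdev wrapped in a delay bdev (latencies are in microseconds, so roughly one second each way) is exported as cnode1, `spdk_nvme_perf` then drives queue-depth-128 random I/O at it for five seconds, and `sleep 2` ensures plenty of commands are in flight when the subsystem is deleted. The aborted commands surface below as completions with sc=8. A dry-run sketch of that sequence (`run` only echoes; the `scripts/rpc.py` path is an assumption — this run issued the same calls via `rpc_cmd`):

```shell
# Dry-run of the delete-subsystem-under-load test flow logged above.
run() { echo "+ $*"; }
rpc="scripts/rpc.py"   # assumed stock SPDK path, not taken from this log

run "$rpc" nvmf_create_transport -t tcp -o -u 8192
run "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
run "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
run "$rpc" bdev_null_create NULL1 1000 512
run "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
run "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# perf runs in the background while the subsystem is torn down mid-I/O:
run spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
run sleep 2
run "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The pass criterion is that the target survives the teardown and every in-flight command is completed (with an abort status) rather than left hanging.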
00:37:13.234 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:13.234 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.234 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Write completed with error (sct=0, sc=8) 
00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Read completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 starting I/O failed: -6 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.492 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 [2024-11-18 08:11:06.398072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd9d400d4b0 is same with the state(6) to be set 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 
Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 starting I/O failed: -6 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 [2024-11-18 08:11:06.398784] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e6f70 is same with the state(6) to be set 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read 
completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error 
(sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Write completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:13.493 Read completed with error (sct=0, sc=8) 00:37:14.427 [2024-11-18 08:11:07.369512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5190 is same with the state(6) to be set 00:37:14.427 Read completed with error (sct=0, sc=8) 00:37:14.427 Read completed with error (sct=0, sc=8) 00:37:14.427 Read completed with error (sct=0, sc=8) 00:37:14.427 Write completed with error (sct=0, sc=8) 00:37:14.427 Read completed with error (sct=0, sc=8) 00:37:14.427 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 
00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 [2024-11-18 08:11:07.402906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd9d400d020 is same with the state(6) to be set 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 [2024-11-18 08:11:07.403069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e7330 is same with the state(6) to be set 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 
00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 [2024-11-18 08:11:07.403277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd9d4000c40 is same with the state(6) to be set 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 
Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Write completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 Read completed with error (sct=0, sc=8) 00:37:14.428 [2024-11-18 08:11:07.403701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd9d400d7e0 is same with the state(6) to be set 00:37:14.428 Initializing NVMe Controllers 00:37:14.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:14.428 Controller IO queue size 128, less than required. 00:37:14.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:14.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:14.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:14.428 Initialization complete. Launching workers. 
00:37:14.428 ========================================================
00:37:14.428 Latency(us)
00:37:14.428 Device Information : IOPS MiB/s Average min max
00:37:14.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 147.36 0.07 929347.69 308.26 2003483.20
00:37:14.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.62 0.09 958715.13 1646.07 1013563.36
00:37:14.428 ========================================================
00:37:14.428 Total : 325.98 0.16 945439.44 308.26 2003483.20
00:37:14.428
00:37:14.428 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:14.428 [2024-11-18 08:11:07.404522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e5190 (9): Bad file descriptor
00:37:14.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:14.428 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:14.428 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 909121
00:37:14.428 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 909121
00:37:14.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (909121) - No such process
00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 909121
00:37:14.995 08:11:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 909121 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 909121 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.995 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.996 [2024-11-18 08:11:07.924280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=909522 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 909522 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:14.996 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:14.996 [2024-11-18 08:11:07.988883] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:15.562 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:15.562 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 909522 00:37:15.562 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:16.128 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:16.128 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 909522 00:37:16.128 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:16.385 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:16.385 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 909522 00:37:16.385 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:16.949 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:16.950 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 909522 00:37:16.950 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:17.515 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:17.515 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 909522 00:37:17.515 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:18.080 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:18.080 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 909522 00:37:18.080 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:18.080 Initializing NVMe Controllers 00:37:18.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:18.080 Controller IO queue size 128, less than required. 00:37:18.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:18.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:18.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:18.081 Initialization complete. Launching workers. 
00:37:18.081 ========================================================
00:37:18.081 Latency(us)
00:37:18.081 Device Information : IOPS MiB/s Average min max
00:37:18.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003578.70 1000236.91 1044043.05
00:37:18.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006278.42 1000191.39 1040850.12
00:37:18.081 ========================================================
00:37:18.081 Total : 256.00 0.12 1004928.56 1000191.39 1044043.05
00:37:18.081
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 909522
00:37:18.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (909522) - No such process
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 909522
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.647 rmmod nvme_tcp 00:37:18.647 rmmod nvme_fabrics 00:37:18.647 rmmod nvme_keyring 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 909095 ']' 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 909095 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 909095 ']' 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 909095 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 909095 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 909095' 00:37:18.647 killing process with pid 909095 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 909095 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 909095 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:18.647 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:21.237
00:37:21.237 real 0m12.446s
00:37:21.237 user 0m24.613s
00:37:21.237 sys 0m3.798s
00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:21.237 ************************************
00:37:21.237 END TEST nvmf_delete_subsystem
00:37:21.237 ************************************
00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:21.237 ************************************
00:37:21.237 START TEST nvmf_host_management
00:37:21.237 ************************************
00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:37:21.237 * Looking for test storage...
00:37:21.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:21.237 08:11:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:21.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.237 --rc genhtml_branch_coverage=1 00:37:21.237 --rc genhtml_function_coverage=1 00:37:21.237 --rc genhtml_legend=1 00:37:21.237 --rc geninfo_all_blocks=1 00:37:21.237 --rc geninfo_unexecuted_blocks=1 00:37:21.237 00:37:21.237 ' 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:21.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.237 --rc genhtml_branch_coverage=1 00:37:21.237 --rc genhtml_function_coverage=1 00:37:21.237 --rc genhtml_legend=1 00:37:21.237 --rc geninfo_all_blocks=1 00:37:21.237 --rc geninfo_unexecuted_blocks=1 00:37:21.237 00:37:21.237 ' 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:21.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.237 --rc genhtml_branch_coverage=1 00:37:21.237 --rc genhtml_function_coverage=1 00:37:21.237 --rc genhtml_legend=1 00:37:21.237 --rc geninfo_all_blocks=1 00:37:21.237 --rc geninfo_unexecuted_blocks=1 00:37:21.237 00:37:21.237 ' 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:21.237 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.237 --rc genhtml_branch_coverage=1 00:37:21.237 --rc genhtml_function_coverage=1 00:37:21.237 --rc genhtml_legend=1 00:37:21.237 --rc geninfo_all_blocks=1 00:37:21.237 --rc geninfo_unexecuted_blocks=1 00:37:21.237 00:37:21.237 ' 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.237 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.238 08:11:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.238 
08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:21.238 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:23.141 
08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:23.141 08:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:23.141 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.141 08:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:23.141 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.141 08:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:23.141 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:23.141 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:23.141 08:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:23.141 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:23.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:23.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:37:23.142 00:37:23.142 --- 10.0.0.2 ping statistics --- 00:37:23.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.142 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:23.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:23.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:37:23.142 00:37:23.142 --- 10.0.0.1 ping statistics --- 00:37:23.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.142 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:23.142 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=911972 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 911972 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 911972 ']' 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.401 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.401 [2024-11-18 08:11:16.293698] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:23.401 [2024-11-18 08:11:16.294770] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:23.401 [2024-11-18 08:11:16.294837] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.401 [2024-11-18 08:11:16.370423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:23.401 [2024-11-18 08:11:16.420162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.401 [2024-11-18 08:11:16.420221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:23.401 [2024-11-18 08:11:16.420250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.401 [2024-11-18 08:11:16.420261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.401 [2024-11-18 08:11:16.420271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:23.401 [2024-11-18 08:11:16.421961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:23.401 [2024-11-18 08:11:16.422023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:23.401 [2024-11-18 08:11:16.422045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:23.401 [2024-11-18 08:11:16.422049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.660 [2024-11-18 08:11:16.507121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:23.660 [2024-11-18 08:11:16.507338] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:23.660 [2024-11-18 08:11:16.507667] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:23.660 [2024-11-18 08:11:16.508294] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:23.660 [2024-11-18 08:11:16.508553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.660 [2024-11-18 08:11:16.562715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.660 08:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.660 Malloc0 00:37:23.660 [2024-11-18 08:11:16.643016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=912019 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 912019 /var/tmp/bdevperf.sock 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 912019 ']' 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:23.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:23.660 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.661 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:23.661 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.661 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:23.661 { 00:37:23.661 "params": { 00:37:23.661 "name": "Nvme$subsystem", 00:37:23.661 "trtype": "$TEST_TRANSPORT", 00:37:23.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:23.661 "adrfam": "ipv4", 00:37:23.661 "trsvcid": "$NVMF_PORT", 00:37:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:23.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:23.661 "hdgst": ${hdgst:-false}, 00:37:23.661 "ddgst": ${ddgst:-false} 00:37:23.661 }, 00:37:23.661 "method": "bdev_nvme_attach_controller" 00:37:23.661 } 00:37:23.661 EOF 00:37:23.661 )") 00:37:23.661 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:23.661 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:37:23.661 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:23.661 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:23.661 "params": { 00:37:23.661 "name": "Nvme0", 00:37:23.661 "trtype": "tcp", 00:37:23.661 "traddr": "10.0.0.2", 00:37:23.661 "adrfam": "ipv4", 00:37:23.661 "trsvcid": "4420", 00:37:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.661 "hdgst": false, 00:37:23.661 "ddgst": false 00:37:23.661 }, 00:37:23.661 "method": "bdev_nvme_attach_controller" 00:37:23.661 }' 00:37:23.661 [2024-11-18 08:11:16.727957] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:23.661 [2024-11-18 08:11:16.728033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912019 ] 00:37:23.919 [2024-11-18 08:11:16.798696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.919 [2024-11-18 08:11:16.846263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.177 Running I/O for 10 seconds... 
00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:24.177 08:11:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:24.177 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:24.436 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:24.437 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:24.437 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:24.437 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.437 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.437 [2024-11-18 08:11:17.462158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.437 [2024-11-18 08:11:17.462226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.437 [2024-11-18 08:11:17.462259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.437 [2024-11-18 08:11:17.462300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.437 [2024-11-18 08:11:17.462327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202e970 is same with the state(6) to be set 00:37:24.437 [2024-11-18 08:11:17.462733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.462758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.462810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.462841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.462873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.462903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.462933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.462963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.462979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.462993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 
[2024-11-18 08:11:17.463022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.437 [2024-11-18 08:11:17.463381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 
[2024-11-18 08:11:17.463531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.437 [2024-11-18 08:11:17.463577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.437 [2024-11-18 08:11:17.463593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:24.438 [2024-11-18 08:11:17.463644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 
08:11:17.463689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.438 [2024-11-18 08:11:17.463846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.463973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.463987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:24.438 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.438 [2024-11-18 08:11:17.464033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.438 [2024-11-18 08:11:17.464543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.438 [2024-11-18 08:11:17.464567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.439 [2024-11-18 08:11:17.464581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.439 [2024-11-18 08:11:17.464597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.439 [2024-11-18 08:11:17.464611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.439 [2024-11-18 08:11:17.464626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.439 [2024-11-18 08:11:17.464640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.439 [2024-11-18 08:11:17.464655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.439 [2024-11-18 08:11:17.464669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.439 [2024-11-18 08:11:17.464684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.439 [2024-11-18 08:11:17.464697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.439 
[2024-11-18 08:11:17.464712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.439 [2024-11-18 08:11:17.464726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.439 [2024-11-18 08:11:17.465942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:24.439 task offset: 81920 on job bdev=Nvme0n1 fails 00:37:24.439 00:37:24.439 Latency(us) 00:37:24.439 [2024-11-18T07:11:17.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.439 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:24.439 Job: Nvme0n1 ended in about 0.41 seconds with error 00:37:24.439 Verification LBA range: start 0x0 length 0x400 00:37:24.439 Nvme0n1 : 0.41 1576.53 98.53 157.65 0.00 35851.77 3021.94 34952.53 00:37:24.439 [2024-11-18T07:11:17.527Z] =================================================================================================================== 00:37:24.439 [2024-11-18T07:11:17.527Z] Total : 1576.53 98.53 157.65 0.00 35851.77 3021.94 34952.53 00:37:24.439 [2024-11-18 08:11:17.467831] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:24.439 [2024-11-18 08:11:17.467884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202e970 (9): Bad file descriptor 00:37:24.439 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.439 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:24.439 [2024-11-18 08:11:17.471519] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 912019 00:37:25.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (912019) - No such process 00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:25.811 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:25.811 { 00:37:25.811 "params": { 00:37:25.811 "name": "Nvme$subsystem", 00:37:25.811 "trtype": "$TEST_TRANSPORT", 00:37:25.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:25.811 "adrfam": "ipv4", 00:37:25.811 "trsvcid": "$NVMF_PORT", 00:37:25.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:25.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:25.812 "hdgst": ${hdgst:-false}, 00:37:25.812 "ddgst": ${ddgst:-false} 
00:37:25.812 }, 00:37:25.812 "method": "bdev_nvme_attach_controller" 00:37:25.812 } 00:37:25.812 EOF 00:37:25.812 )") 00:37:25.812 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:25.812 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:25.812 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:25.812 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:25.812 "params": { 00:37:25.812 "name": "Nvme0", 00:37:25.812 "trtype": "tcp", 00:37:25.812 "traddr": "10.0.0.2", 00:37:25.812 "adrfam": "ipv4", 00:37:25.812 "trsvcid": "4420", 00:37:25.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:25.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:25.812 "hdgst": false, 00:37:25.812 "ddgst": false 00:37:25.812 }, 00:37:25.812 "method": "bdev_nvme_attach_controller" 00:37:25.812 }' 00:37:25.812 [2024-11-18 08:11:18.523708] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:25.812 [2024-11-18 08:11:18.523819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912242 ] 00:37:25.812 [2024-11-18 08:11:18.594725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.812 [2024-11-18 08:11:18.641032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.812 Running I/O for 1 seconds... 
00:37:27.002 1664.00 IOPS, 104.00 MiB/s 00:37:27.002 Latency(us) 00:37:27.002 [2024-11-18T07:11:20.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:27.002 Verification LBA range: start 0x0 length 0x400 00:37:27.002 Nvme0n1 : 1.02 1697.10 106.07 0.00 0.00 37097.64 5024.43 33787.45 00:37:27.002 [2024-11-18T07:11:20.090Z] =================================================================================================================== 00:37:27.002 [2024-11-18T07:11:20.090Z] Total : 1697.10 106.07 0.00 0.00 37097.64 5024.43 33787.45 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:27.002 
08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:27.002 rmmod nvme_tcp 00:37:27.002 rmmod nvme_fabrics 00:37:27.002 rmmod nvme_keyring 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 911972 ']' 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 911972 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 911972 ']' 00:37:27.002 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 911972 00:37:27.003 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:27.003 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.003 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 911972 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:27.261 08:11:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 911972' 00:37:27.261 killing process with pid 911972 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 911972 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 911972 00:37:27.261 [2024-11-18 08:11:20.294906] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.261 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:29.803 00:37:29.803 real 0m8.554s 00:37:29.803 user 0m16.669s 00:37:29.803 sys 0m3.515s 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:29.803 ************************************ 00:37:29.803 END TEST nvmf_host_management 00:37:29.803 ************************************ 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:29.803 ************************************ 00:37:29.803 START TEST nvmf_lvol 00:37:29.803 ************************************ 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:29.803 * Looking for test storage... 
00:37:29.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.803 --rc genhtml_branch_coverage=1 00:37:29.803 --rc genhtml_function_coverage=1 00:37:29.803 --rc genhtml_legend=1 00:37:29.803 --rc geninfo_all_blocks=1 00:37:29.803 --rc geninfo_unexecuted_blocks=1 00:37:29.803 00:37:29.803 ' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.803 --rc genhtml_branch_coverage=1 00:37:29.803 --rc genhtml_function_coverage=1 00:37:29.803 --rc genhtml_legend=1 00:37:29.803 --rc geninfo_all_blocks=1 00:37:29.803 --rc geninfo_unexecuted_blocks=1 00:37:29.803 00:37:29.803 ' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.803 --rc genhtml_branch_coverage=1 00:37:29.803 --rc genhtml_function_coverage=1 00:37:29.803 --rc genhtml_legend=1 00:37:29.803 --rc geninfo_all_blocks=1 00:37:29.803 --rc geninfo_unexecuted_blocks=1 00:37:29.803 00:37:29.803 ' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.803 --rc genhtml_branch_coverage=1 00:37:29.803 --rc genhtml_function_coverage=1 00:37:29.803 --rc genhtml_legend=1 00:37:29.803 --rc geninfo_all_blocks=1 00:37:29.803 --rc geninfo_unexecuted_blocks=1 00:37:29.803 00:37:29.803 ' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.803 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:29.804 
08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.804 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:31.710 08:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.710 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:31.711 08:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:31.711 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:31.711 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.711 08:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:31.711 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.711 08:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:31.711 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:31.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:37:31.711 00:37:31.711 --- 10.0.0.2 ping statistics --- 00:37:31.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.711 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:31.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:31.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:37:31.711 00:37:31.711 --- 10.0.0.1 ping statistics --- 00:37:31.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.711 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:31.711 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=914368 
00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 914368 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 914368 ']' 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.970 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:31.970 [2024-11-18 08:11:24.859274] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:31.970 [2024-11-18 08:11:24.860347] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:37:31.970 [2024-11-18 08:11:24.860413] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:31.970 [2024-11-18 08:11:24.931754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:31.970 [2024-11-18 08:11:24.978419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:31.970 [2024-11-18 08:11:24.978472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:31.970 [2024-11-18 08:11:24.978508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:31.970 [2024-11-18 08:11:24.978520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:31.970 [2024-11-18 08:11:24.978530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:31.970 [2024-11-18 08:11:24.980020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:31.970 [2024-11-18 08:11:24.980082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:31.970 [2024-11-18 08:11:24.980085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.229 [2024-11-18 08:11:25.064558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:32.229 [2024-11-18 08:11:25.064800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:32.229 [2024-11-18 08:11:25.064827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:32.229 [2024-11-18 08:11:25.065085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:32.229 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.229 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:32.229 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:32.229 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:32.229 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:32.229 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:32.229 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:32.488 [2024-11-18 08:11:25.364758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:32.488 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:32.747 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:32.747 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:33.005 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:33.005 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:33.263 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:33.521 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=68630ca8-3f53-4dc1-bb33-275edc21fb9c 00:37:33.521 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68630ca8-3f53-4dc1-bb33-275edc21fb9c lvol 20 00:37:33.779 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2a8ba59e-8088-4327-bdcd-1329740a5b15 00:37:33.779 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:34.037 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2a8ba59e-8088-4327-bdcd-1329740a5b15 00:37:34.294 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:34.551 [2024-11-18 08:11:27.596920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.551 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:34.808 
08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=914789 00:37:34.809 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:34.809 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:36.183 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2a8ba59e-8088-4327-bdcd-1329740a5b15 MY_SNAPSHOT 00:37:36.183 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=390b0ee2-6f3c-4ef8-9dfb-1e1f635b66fa 00:37:36.183 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2a8ba59e-8088-4327-bdcd-1329740a5b15 30 00:37:36.442 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 390b0ee2-6f3c-4ef8-9dfb-1e1f635b66fa MY_CLONE 00:37:37.007 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2c1ba96f-9101-45b9-a96b-ddc19eed53c7 00:37:37.007 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2c1ba96f-9101-45b9-a96b-ddc19eed53c7 00:37:37.573 08:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 914789 00:37:45.688 Initializing NVMe Controllers 00:37:45.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:45.688 
Controller IO queue size 128, less than required. 00:37:45.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:45.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:45.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:45.688 Initialization complete. Launching workers. 00:37:45.688 ======================================================== 00:37:45.688 Latency(us) 00:37:45.688 Device Information : IOPS MiB/s Average min max 00:37:45.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10641.70 41.57 12032.98 6187.67 62828.73 00:37:45.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10509.60 41.05 12181.30 4771.67 73730.70 00:37:45.688 ======================================================== 00:37:45.688 Total : 21151.29 82.62 12106.68 4771.67 73730.70 00:37:45.688 00:37:45.688 08:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:45.688 08:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2a8ba59e-8088-4327-bdcd-1329740a5b15 00:37:45.946 08:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68630ca8-3f53-4dc1-bb33-275edc21fb9c 00:37:46.204 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:46.205 rmmod nvme_tcp 00:37:46.205 rmmod nvme_fabrics 00:37:46.205 rmmod nvme_keyring 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 914368 ']' 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 914368 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 914368 ']' 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 914368 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:46.205 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 914368 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 914368' 00:37:46.463 killing process with pid 914368 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 914368 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 914368 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.463 08:11:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:46.463 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:49.001 00:37:49.001 real 0m19.164s 00:37:49.001 user 0m56.272s 00:37:49.001 sys 0m7.897s 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:49.001 ************************************ 00:37:49.001 END TEST nvmf_lvol 00:37:49.001 ************************************ 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:49.001 ************************************ 00:37:49.001 START TEST nvmf_lvs_grow 00:37:49.001 ************************************ 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:49.001 * Looking for test storage... 
00:37:49.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.001 08:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.001 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.001 08:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:49.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.002 --rc genhtml_branch_coverage=1 00:37:49.002 --rc genhtml_function_coverage=1 00:37:49.002 --rc genhtml_legend=1 00:37:49.002 --rc geninfo_all_blocks=1 00:37:49.002 --rc geninfo_unexecuted_blocks=1 00:37:49.002 00:37:49.002 ' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:49.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.002 --rc genhtml_branch_coverage=1 00:37:49.002 --rc genhtml_function_coverage=1 00:37:49.002 --rc genhtml_legend=1 00:37:49.002 --rc geninfo_all_blocks=1 00:37:49.002 --rc geninfo_unexecuted_blocks=1 00:37:49.002 00:37:49.002 ' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:49.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.002 --rc genhtml_branch_coverage=1 00:37:49.002 --rc genhtml_function_coverage=1 00:37:49.002 --rc genhtml_legend=1 00:37:49.002 --rc geninfo_all_blocks=1 00:37:49.002 --rc geninfo_unexecuted_blocks=1 00:37:49.002 00:37:49.002 ' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:49.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.002 --rc genhtml_branch_coverage=1 00:37:49.002 --rc genhtml_function_coverage=1 00:37:49.002 --rc genhtml_legend=1 00:37:49.002 --rc geninfo_all_blocks=1 00:37:49.002 --rc 
geninfo_unexecuted_blocks=1 00:37:49.002 00:37:49.002 ' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:49.002 08:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.002 08:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:49.002 08:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:49.002 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.906 
08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:50.906 08:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:50.906 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:50.907 08:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:50.907 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:50.907 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:50.907 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.907 08:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:50.907 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:50.907 
08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:50.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:50.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:37:50.907 00:37:50.907 --- 10.0.0.2 ping statistics --- 00:37:50.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.907 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:50.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:50.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:37:50.907 00:37:50.907 --- 10.0.0.1 ping statistics --- 00:37:50.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.907 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.907 08:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=918044 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 918044 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 918044 ']' 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.907 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:50.908 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.908 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:50.908 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.908 [2024-11-18 08:11:43.983927] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:50.908 [2024-11-18 08:11:43.985035] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:37:50.908 [2024-11-18 08:11:43.985092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:51.166 [2024-11-18 08:11:44.057068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.166 [2024-11-18 08:11:44.101643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:51.166 [2024-11-18 08:11:44.101694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:51.166 [2024-11-18 08:11:44.101709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:51.166 [2024-11-18 08:11:44.101720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:51.166 [2024-11-18 08:11:44.101730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:51.166 [2024-11-18 08:11:44.102279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.166 [2024-11-18 08:11:44.183982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:51.166 [2024-11-18 08:11:44.184265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:51.166 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:51.166 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:51.166 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:51.166 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:51.166 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:51.166 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:51.166 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:51.424 [2024-11-18 08:11:44.494825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:51.424 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:51.424 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:51.424 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.424 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:51.682 ************************************ 00:37:51.682 START TEST lvs_grow_clean 00:37:51.682 ************************************ 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:37:51.682 08:11:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:51.682 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:51.942 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:51.942 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:52.202 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:37:52.202 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:37:52.202 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:52.463 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:52.463 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:52.463 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 lvol 150 00:37:52.727 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9 00:37:52.727 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:52.727 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:53.023 [2024-11-18 08:11:45.914729] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:53.023 [2024-11-18 08:11:45.914826] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:53.023 true 00:37:53.023 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:53.023 08:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:37:53.281 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:53.281 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:53.539 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9 00:37:53.797 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:54.057 [2024-11-18 08:11:47.011063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:54.057 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
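The `bdev_aio_rescan` notice above reports "old block count 51200, new block count 102400" after the backing file is grown with `truncate -s 400M`. With the 4096-byte logical block size passed to `bdev_aio_create`, those counts follow directly:

```python
BLOCK_SIZE = 4096                              # bdev_aio_create ... 4096
old_blocks = 200 * 1024 * 1024 // BLOCK_SIZE   # blocks before the truncate
new_blocks = 400 * 1024 * 1024 // BLOCK_SIZE   # blocks after growing to 400M
print(old_blocks, new_blocks)                  # 51200 102400, as in the rescan notice
```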
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=918476 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 918476 /var/tmp/bdevperf.sock 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 918476 ']' 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:54.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.317 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:54.317 [2024-11-18 08:11:47.353244] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:54.317 [2024-11-18 08:11:47.353339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid918476 ] 00:37:54.577 [2024-11-18 08:11:47.420958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.578 [2024-11-18 08:11:47.472247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.578 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.578 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:54.578 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:55.147 Nvme0n1 00:37:55.147 08:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:55.407 [ 00:37:55.407 { 00:37:55.407 "name": "Nvme0n1", 00:37:55.407 "aliases": [ 00:37:55.407 "d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9" 00:37:55.407 ], 00:37:55.407 "product_name": "NVMe disk", 00:37:55.407 
"block_size": 4096, 00:37:55.407 "num_blocks": 38912, 00:37:55.407 "uuid": "d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9", 00:37:55.407 "numa_id": 0, 00:37:55.407 "assigned_rate_limits": { 00:37:55.407 "rw_ios_per_sec": 0, 00:37:55.407 "rw_mbytes_per_sec": 0, 00:37:55.407 "r_mbytes_per_sec": 0, 00:37:55.407 "w_mbytes_per_sec": 0 00:37:55.407 }, 00:37:55.407 "claimed": false, 00:37:55.407 "zoned": false, 00:37:55.407 "supported_io_types": { 00:37:55.407 "read": true, 00:37:55.407 "write": true, 00:37:55.407 "unmap": true, 00:37:55.407 "flush": true, 00:37:55.407 "reset": true, 00:37:55.407 "nvme_admin": true, 00:37:55.407 "nvme_io": true, 00:37:55.407 "nvme_io_md": false, 00:37:55.407 "write_zeroes": true, 00:37:55.407 "zcopy": false, 00:37:55.407 "get_zone_info": false, 00:37:55.407 "zone_management": false, 00:37:55.407 "zone_append": false, 00:37:55.407 "compare": true, 00:37:55.407 "compare_and_write": true, 00:37:55.407 "abort": true, 00:37:55.407 "seek_hole": false, 00:37:55.407 "seek_data": false, 00:37:55.407 "copy": true, 00:37:55.407 "nvme_iov_md": false 00:37:55.407 }, 00:37:55.407 "memory_domains": [ 00:37:55.407 { 00:37:55.407 "dma_device_id": "system", 00:37:55.407 "dma_device_type": 1 00:37:55.407 } 00:37:55.407 ], 00:37:55.407 "driver_specific": { 00:37:55.407 "nvme": [ 00:37:55.407 { 00:37:55.407 "trid": { 00:37:55.407 "trtype": "TCP", 00:37:55.407 "adrfam": "IPv4", 00:37:55.407 "traddr": "10.0.0.2", 00:37:55.407 "trsvcid": "4420", 00:37:55.407 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:55.407 }, 00:37:55.407 "ctrlr_data": { 00:37:55.407 "cntlid": 1, 00:37:55.407 "vendor_id": "0x8086", 00:37:55.407 "model_number": "SPDK bdev Controller", 00:37:55.407 "serial_number": "SPDK0", 00:37:55.407 "firmware_revision": "25.01", 00:37:55.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:55.407 "oacs": { 00:37:55.407 "security": 0, 00:37:55.407 "format": 0, 00:37:55.407 "firmware": 0, 00:37:55.407 "ns_manage": 0 00:37:55.407 }, 00:37:55.407 "multi_ctrlr": true, 
00:37:55.407 "ana_reporting": false 00:37:55.407 }, 00:37:55.407 "vs": { 00:37:55.407 "nvme_version": "1.3" 00:37:55.407 }, 00:37:55.407 "ns_data": { 00:37:55.407 "id": 1, 00:37:55.407 "can_share": true 00:37:55.407 } 00:37:55.407 } 00:37:55.407 ], 00:37:55.407 "mp_policy": "active_passive" 00:37:55.407 } 00:37:55.407 } 00:37:55.407 ] 00:37:55.407 08:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=918610 00:37:55.407 08:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:55.407 08:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:55.407 Running I/O for 10 seconds... 00:37:56.345 Latency(us) 00:37:56.345 [2024-11-18T07:11:49.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:56.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:56.345 Nvme0n1 : 1.00 14478.00 56.55 0.00 0.00 0.00 0.00 0.00 00:37:56.345 [2024-11-18T07:11:49.433Z] =================================================================================================================== 00:37:56.345 [2024-11-18T07:11:49.433Z] Total : 14478.00 56.55 0.00 0.00 0.00 0.00 0.00 00:37:56.345 00:37:57.284 08:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:37:57.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:57.542 Nvme0n1 : 2.00 14668.50 57.30 0.00 0.00 0.00 0.00 0.00 00:37:57.542 [2024-11-18T07:11:50.630Z] 
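The `bdev_get_bdevs` output above reports `"num_blocks": 38912` for a lvol created with size 150 (MiB). Since lvol space is allocated in whole 4 MiB clusters, the 150 MiB request rounds up to 38 clusters; a sketch of that rounding:

```python
import math

CLUSTER_SZ = 4 * 1024 * 1024                   # lvstore cluster size
BLOCK_SIZE = 4096                              # logical block size of the bdev
lvol_mb = 150                                  # bdev_lvol_create ... lvol 150

clusters = math.ceil(lvol_mb * 1024 * 1024 / CLUSTER_SZ)   # rounds 37.5 up to 38
num_blocks = clusters * CLUSTER_SZ // BLOCK_SIZE
print(clusters, num_blocks)                    # 38 38912, matching the JSON above
```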
=================================================================================================================== 00:37:57.542 [2024-11-18T07:11:50.630Z] Total : 14668.50 57.30 0.00 0.00 0.00 0.00 0.00 00:37:57.542 00:37:57.542 true 00:37:57.542 08:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:37:57.542 08:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:58.108 08:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:58.108 08:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:58.108 08:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 918610 00:37:58.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:58.366 Nvme0n1 : 3.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:37:58.366 [2024-11-18T07:11:51.454Z] =================================================================================================================== 00:37:58.366 [2024-11-18T07:11:51.454Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:37:58.366 00:37:59.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.746 Nvme0n1 : 4.00 14890.75 58.17 0.00 0.00 0.00 0.00 0.00 00:37:59.746 [2024-11-18T07:11:52.834Z] =================================================================================================================== 00:37:59.746 [2024-11-18T07:11:52.834Z] Total : 14890.75 58.17 0.00 0.00 0.00 0.00 0.00 00:37:59.746 00:38:00.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
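After `bdev_lvol_grow_lvstore`, the re-queried `total_data_clusters` is 99. This is the same cluster arithmetic as at creation time, now over the grown 400 MiB file (again assuming one cluster of lvstore metadata overhead, inferred from the logged counts):

```python
AIO_SIZE = 400 * 1024 * 1024       # backing file after truncate -s 400M
CLUSTER_SZ = 4 * 1024 * 1024       # --cluster-sz 4194304
METADATA_CLUSTERS = 1              # assumption: lvstore metadata overhead

data_clusters = AIO_SIZE // CLUSTER_SZ - METADATA_CLUSTERS
print(data_clusters)               # 99, matching (( data_clusters == 99 ))
```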
4096) 00:38:00.680 Nvme0n1 : 5.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:38:00.680 [2024-11-18T07:11:53.768Z] =================================================================================================================== 00:38:00.680 [2024-11-18T07:11:53.768Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:38:00.680 00:38:01.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:01.619 Nvme0n1 : 6.00 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:38:01.619 [2024-11-18T07:11:54.707Z] =================================================================================================================== 00:38:01.619 [2024-11-18T07:11:54.707Z] Total : 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:38:01.619 00:38:02.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.554 Nvme0n1 : 7.00 15097.29 58.97 0.00 0.00 0.00 0.00 0.00 00:38:02.554 [2024-11-18T07:11:55.642Z] =================================================================================================================== 00:38:02.554 [2024-11-18T07:11:55.642Z] Total : 15097.29 58.97 0.00 0.00 0.00 0.00 0.00 00:38:02.554 00:38:03.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:03.490 Nvme0n1 : 8.00 15146.88 59.17 0.00 0.00 0.00 0.00 0.00 00:38:03.490 [2024-11-18T07:11:56.578Z] =================================================================================================================== 00:38:03.490 [2024-11-18T07:11:56.578Z] Total : 15146.88 59.17 0.00 0.00 0.00 0.00 0.00 00:38:03.490 00:38:04.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.425 Nvme0n1 : 9.00 15185.44 59.32 0.00 0.00 0.00 0.00 0.00 00:38:04.425 [2024-11-18T07:11:57.513Z] =================================================================================================================== 00:38:04.425 [2024-11-18T07:11:57.513Z] Total : 15185.44 59.32 0.00 0.00 0.00 0.00 0.00 00:38:04.425 
00:38:05.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.363 Nvme0n1 : 10.00 15229.00 59.49 0.00 0.00 0.00 0.00 0.00 00:38:05.363 [2024-11-18T07:11:58.451Z] =================================================================================================================== 00:38:05.363 [2024-11-18T07:11:58.451Z] Total : 15229.00 59.49 0.00 0.00 0.00 0.00 0.00 00:38:05.363 00:38:05.363 00:38:05.363 Latency(us) 00:38:05.363 [2024-11-18T07:11:58.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.363 Nvme0n1 : 10.01 15228.67 59.49 0.00 0.00 8400.62 4174.89 18738.44 00:38:05.363 [2024-11-18T07:11:58.451Z] =================================================================================================================== 00:38:05.363 [2024-11-18T07:11:58.451Z] Total : 15228.67 59.49 0.00 0.00 8400.62 4174.89 18738.44 00:38:05.363 { 00:38:05.363 "results": [ 00:38:05.363 { 00:38:05.363 "job": "Nvme0n1", 00:38:05.363 "core_mask": "0x2", 00:38:05.363 "workload": "randwrite", 00:38:05.363 "status": "finished", 00:38:05.363 "queue_depth": 128, 00:38:05.363 "io_size": 4096, 00:38:05.363 "runtime": 10.008625, 00:38:05.363 "iops": 15228.665276199277, 00:38:05.363 "mibps": 59.48697373515343, 00:38:05.363 "io_failed": 0, 00:38:05.363 "io_timeout": 0, 00:38:05.363 "avg_latency_us": 8400.623935016909, 00:38:05.363 "min_latency_us": 4174.885925925926, 00:38:05.363 "max_latency_us": 18738.44148148148 00:38:05.363 } 00:38:05.363 ], 00:38:05.363 "core_count": 1 00:38:05.363 } 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 918476 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 918476 ']' 00:38:05.624 08:11:58 
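The bdevperf results JSON above reports derived fields that are internally consistent: MiB/s is IOPS times the 4096-byte I/O size (`-o 4096`), and total completed I/O is IOPS times the runtime. A sketch of that cross-check using the figures from the log:

```python
io_size = 4096                     # -o 4096
iops = 15228.665276199277          # "iops" from the results JSON
runtime = 10.008625                # "runtime" in seconds

mibps = iops * io_size / (1024 * 1024)   # convert IOPS at 4 KiB to MiB/s
total_ios = iops * runtime               # total completed I/Os over the ~10 s run
print(round(mibps, 2))                   # ~59.49 MiB/s, matching "mibps" in the JSON
```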
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 918476 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 918476 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 918476' 00:38:05.624 killing process with pid 918476 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 918476 00:38:05.624 Received shutdown signal, test time was about 10.000000 seconds 00:38:05.624 00:38:05.624 Latency(us) 00:38:05.624 [2024-11-18T07:11:58.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.624 [2024-11-18T07:11:58.712Z] =================================================================================================================== 00:38:05.624 [2024-11-18T07:11:58.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 918476 00:38:05.624 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:05.883 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:06.141 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:38:06.141 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:06.706 [2024-11-18 08:11:59.750810] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
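The teardown path above re-queries the lvstore and reads `free_clusters=61`, which is consistent with the grown 99-cluster store minus the 38 clusters allocated to the 150 MiB lvol (per `"num_allocated_clusters": 38` later in the trace):

```python
total_data_clusters = 99   # from the grown lvstore
lvol_clusters = 38         # num_allocated_clusters of the 150 MiB lvol

free_clusters = total_data_clusters - lvol_clusters
print(free_clusters)       # 61, matching free_clusters=61 in the log
```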
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:06.706 08:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:38:06.966 request: 00:38:06.966 { 00:38:06.966 "uuid": "92643e75-e3d4-4a9e-a267-7dd3814e3ce0", 00:38:06.966 "method": 
"bdev_lvol_get_lvstores", 00:38:06.966 "req_id": 1 00:38:06.966 } 00:38:06.966 Got JSON-RPC error response 00:38:06.966 response: 00:38:06.966 { 00:38:06.966 "code": -19, 00:38:06.966 "message": "No such device" 00:38:06.966 } 00:38:07.225 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:07.225 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:07.225 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:07.225 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:07.225 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:07.484 aio_bdev 00:38:07.484 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9 00:38:07.484 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9 00:38:07.485 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:07.485 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:07.485 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:07.485 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:07.485 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:07.744 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9 -t 2000 00:38:08.002 [ 00:38:08.002 { 00:38:08.002 "name": "d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9", 00:38:08.002 "aliases": [ 00:38:08.002 "lvs/lvol" 00:38:08.002 ], 00:38:08.002 "product_name": "Logical Volume", 00:38:08.002 "block_size": 4096, 00:38:08.002 "num_blocks": 38912, 00:38:08.002 "uuid": "d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9", 00:38:08.002 "assigned_rate_limits": { 00:38:08.002 "rw_ios_per_sec": 0, 00:38:08.002 "rw_mbytes_per_sec": 0, 00:38:08.002 "r_mbytes_per_sec": 0, 00:38:08.002 "w_mbytes_per_sec": 0 00:38:08.002 }, 00:38:08.002 "claimed": false, 00:38:08.002 "zoned": false, 00:38:08.002 "supported_io_types": { 00:38:08.002 "read": true, 00:38:08.002 "write": true, 00:38:08.002 "unmap": true, 00:38:08.002 "flush": false, 00:38:08.002 "reset": true, 00:38:08.002 "nvme_admin": false, 00:38:08.002 "nvme_io": false, 00:38:08.002 "nvme_io_md": false, 00:38:08.002 "write_zeroes": true, 00:38:08.002 "zcopy": false, 00:38:08.002 "get_zone_info": false, 00:38:08.002 "zone_management": false, 00:38:08.002 "zone_append": false, 00:38:08.002 "compare": false, 00:38:08.002 "compare_and_write": false, 00:38:08.002 "abort": false, 00:38:08.002 "seek_hole": true, 00:38:08.002 "seek_data": true, 00:38:08.002 "copy": false, 00:38:08.002 "nvme_iov_md": false 00:38:08.002 }, 00:38:08.002 "driver_specific": { 00:38:08.002 "lvol": { 00:38:08.002 "lvol_store_uuid": "92643e75-e3d4-4a9e-a267-7dd3814e3ce0", 00:38:08.002 "base_bdev": "aio_bdev", 00:38:08.002 
"thin_provision": false, 00:38:08.002 "num_allocated_clusters": 38, 00:38:08.002 "snapshot": false, 00:38:08.002 "clone": false, 00:38:08.002 "esnap_clone": false 00:38:08.002 } 00:38:08.002 } 00:38:08.002 } 00:38:08.002 ] 00:38:08.002 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:08.002 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:38:08.002 08:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:08.261 08:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:08.261 08:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 00:38:08.261 08:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:08.519 08:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:08.519 08:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5381f0d-84f0-4c51-b09a-e7b3f5ec6bd9 00:38:08.776 08:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 92643e75-e3d4-4a9e-a267-7dd3814e3ce0 
00:38:09.034 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:09.293 00:38:09.293 real 0m17.763s 00:38:09.293 user 0m17.329s 00:38:09.293 sys 0m1.849s 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:09.293 ************************************ 00:38:09.293 END TEST lvs_grow_clean 00:38:09.293 ************************************ 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:09.293 ************************************ 00:38:09.293 START TEST lvs_grow_dirty 00:38:09.293 ************************************ 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:09.293 08:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:09.293 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:09.552 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:09.552 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:10.122 08:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:10.122 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:10.122 08:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:10.122 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:10.122 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:10.122 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 lvol 150 00:38:10.689 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5b5a590e-b9af-40c3-96ad-ce63ee4555aa 00:38:10.689 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.689 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:10.689 [2024-11-18 08:12:03.746731] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:10.689 [2024-11-18 
08:12:03.746833] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:10.689 true 00:38:10.689 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:10.689 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:11.255 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:11.255 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:11.255 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5b5a590e-b9af-40c3-96ad-ce63ee4555aa 00:38:11.515 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:11.775 [2024-11-18 08:12:04.847047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.775 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=920741 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 920741 /var/tmp/bdevperf.sock 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 920741 ']' 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:12.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:12.344 [2024-11-18 08:12:05.182724] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:12.344 [2024-11-18 08:12:05.182827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920741 ] 00:38:12.344 [2024-11-18 08:12:05.252895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.344 [2024-11-18 08:12:05.305891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:12.344 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:12.911 Nvme0n1 00:38:12.911 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:13.169 [ 00:38:13.169 { 00:38:13.169 "name": "Nvme0n1", 00:38:13.169 "aliases": [ 00:38:13.169 "5b5a590e-b9af-40c3-96ad-ce63ee4555aa" 00:38:13.169 ], 00:38:13.169 "product_name": "NVMe disk", 00:38:13.169 "block_size": 4096, 00:38:13.169 "num_blocks": 38912, 00:38:13.169 "uuid": "5b5a590e-b9af-40c3-96ad-ce63ee4555aa", 00:38:13.169 "numa_id": 0, 00:38:13.169 "assigned_rate_limits": { 00:38:13.169 "rw_ios_per_sec": 0, 00:38:13.169 "rw_mbytes_per_sec": 0, 00:38:13.169 "r_mbytes_per_sec": 0, 00:38:13.169 "w_mbytes_per_sec": 0 00:38:13.169 }, 00:38:13.169 "claimed": false, 00:38:13.169 "zoned": false, 
00:38:13.169 "supported_io_types": { 00:38:13.169 "read": true, 00:38:13.169 "write": true, 00:38:13.169 "unmap": true, 00:38:13.169 "flush": true, 00:38:13.169 "reset": true, 00:38:13.169 "nvme_admin": true, 00:38:13.169 "nvme_io": true, 00:38:13.169 "nvme_io_md": false, 00:38:13.169 "write_zeroes": true, 00:38:13.169 "zcopy": false, 00:38:13.169 "get_zone_info": false, 00:38:13.169 "zone_management": false, 00:38:13.169 "zone_append": false, 00:38:13.169 "compare": true, 00:38:13.169 "compare_and_write": true, 00:38:13.169 "abort": true, 00:38:13.169 "seek_hole": false, 00:38:13.169 "seek_data": false, 00:38:13.169 "copy": true, 00:38:13.169 "nvme_iov_md": false 00:38:13.169 }, 00:38:13.169 "memory_domains": [ 00:38:13.169 { 00:38:13.169 "dma_device_id": "system", 00:38:13.169 "dma_device_type": 1 00:38:13.169 } 00:38:13.169 ], 00:38:13.169 "driver_specific": { 00:38:13.169 "nvme": [ 00:38:13.169 { 00:38:13.169 "trid": { 00:38:13.169 "trtype": "TCP", 00:38:13.169 "adrfam": "IPv4", 00:38:13.169 "traddr": "10.0.0.2", 00:38:13.169 "trsvcid": "4420", 00:38:13.169 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:13.169 }, 00:38:13.169 "ctrlr_data": { 00:38:13.169 "cntlid": 1, 00:38:13.169 "vendor_id": "0x8086", 00:38:13.169 "model_number": "SPDK bdev Controller", 00:38:13.169 "serial_number": "SPDK0", 00:38:13.169 "firmware_revision": "25.01", 00:38:13.169 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:13.169 "oacs": { 00:38:13.169 "security": 0, 00:38:13.169 "format": 0, 00:38:13.169 "firmware": 0, 00:38:13.169 "ns_manage": 0 00:38:13.169 }, 00:38:13.169 "multi_ctrlr": true, 00:38:13.169 "ana_reporting": false 00:38:13.169 }, 00:38:13.169 "vs": { 00:38:13.169 "nvme_version": "1.3" 00:38:13.169 }, 00:38:13.169 "ns_data": { 00:38:13.169 "id": 1, 00:38:13.169 "can_share": true 00:38:13.169 } 00:38:13.169 } 00:38:13.169 ], 00:38:13.169 "mp_policy": "active_passive" 00:38:13.169 } 00:38:13.169 } 00:38:13.169 ] 00:38:13.169 08:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=920795 00:38:13.169 08:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:13.169 08:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:13.427 Running I/O for 10 seconds... 00:38:14.368 Latency(us) 00:38:14.368 [2024-11-18T07:12:07.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.368 Nvme0n1 : 1.00 14669.00 57.30 0.00 0.00 0.00 0.00 0.00 00:38:14.368 [2024-11-18T07:12:07.456Z] =================================================================================================================== 00:38:14.368 [2024-11-18T07:12:07.456Z] Total : 14669.00 57.30 0.00 0.00 0.00 0.00 0.00 00:38:14.368 00:38:15.304 08:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:15.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.304 Nvme0n1 : 2.00 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:38:15.304 [2024-11-18T07:12:08.392Z] =================================================================================================================== 00:38:15.304 [2024-11-18T07:12:08.392Z] Total : 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:38:15.304 00:38:15.562 true 00:38:15.562 08:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:15.562 08:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:15.820 08:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:15.820 08:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:15.820 08:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 920795 00:38:16.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.388 Nvme0n1 : 3.00 14828.00 57.92 0.00 0.00 0.00 0.00 0.00 00:38:16.388 [2024-11-18T07:12:09.476Z] =================================================================================================================== 00:38:16.388 [2024-11-18T07:12:09.476Z] Total : 14828.00 57.92 0.00 0.00 0.00 0.00 0.00 00:38:16.388 00:38:17.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.326 Nvme0n1 : 4.00 14939.50 58.36 0.00 0.00 0.00 0.00 0.00 00:38:17.326 [2024-11-18T07:12:10.414Z] =================================================================================================================== 00:38:17.326 [2024-11-18T07:12:10.414Z] Total : 14939.50 58.36 0.00 0.00 0.00 0.00 0.00 00:38:17.326 00:38:18.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.275 Nvme0n1 : 5.00 14999.60 58.59 0.00 0.00 0.00 0.00 0.00 00:38:18.275 [2024-11-18T07:12:11.363Z] =================================================================================================================== 00:38:18.275 [2024-11-18T07:12:11.363Z] Total : 14999.60 58.59 0.00 0.00 0.00 0.00 0.00 00:38:18.275 00:38:19.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:19.217 Nvme0n1 : 6.00 15071.50 58.87 0.00 0.00 0.00 0.00 0.00 00:38:19.217 [2024-11-18T07:12:12.305Z] =================================================================================================================== 00:38:19.217 [2024-11-18T07:12:12.305Z] Total : 15071.50 58.87 0.00 0.00 0.00 0.00 0.00 00:38:19.217 00:38:20.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.591 Nvme0n1 : 7.00 15140.86 59.14 0.00 0.00 0.00 0.00 0.00 00:38:20.591 [2024-11-18T07:12:13.679Z] =================================================================================================================== 00:38:20.591 [2024-11-18T07:12:13.679Z] Total : 15140.86 59.14 0.00 0.00 0.00 0.00 0.00 00:38:20.591 00:38:21.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.532 Nvme0n1 : 8.00 15200.88 59.38 0.00 0.00 0.00 0.00 0.00 00:38:21.532 [2024-11-18T07:12:14.620Z] =================================================================================================================== 00:38:21.532 [2024-11-18T07:12:14.620Z] Total : 15200.88 59.38 0.00 0.00 0.00 0.00 0.00 00:38:21.532 00:38:22.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.471 Nvme0n1 : 9.00 15219.33 59.45 0.00 0.00 0.00 0.00 0.00 00:38:22.471 [2024-11-18T07:12:15.559Z] =================================================================================================================== 00:38:22.471 [2024-11-18T07:12:15.559Z] Total : 15219.33 59.45 0.00 0.00 0.00 0.00 0.00 00:38:22.471 00:38:23.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.409 Nvme0n1 : 10.00 15246.80 59.56 0.00 0.00 0.00 0.00 0.00 00:38:23.409 [2024-11-18T07:12:16.497Z] =================================================================================================================== 00:38:23.409 [2024-11-18T07:12:16.497Z] Total : 15246.80 59.56 0.00 0.00 0.00 0.00 0.00 00:38:23.409 00:38:23.409 
00:38:23.409 Latency(us) 00:38:23.409 [2024-11-18T07:12:16.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.409 Nvme0n1 : 10.01 15250.43 59.57 0.00 0.00 8388.69 4320.52 18544.26 00:38:23.409 [2024-11-18T07:12:16.497Z] =================================================================================================================== 00:38:23.409 [2024-11-18T07:12:16.497Z] Total : 15250.43 59.57 0.00 0.00 8388.69 4320.52 18544.26 00:38:23.409 { 00:38:23.409 "results": [ 00:38:23.409 { 00:38:23.409 "job": "Nvme0n1", 00:38:23.409 "core_mask": "0x2", 00:38:23.409 "workload": "randwrite", 00:38:23.409 "status": "finished", 00:38:23.409 "queue_depth": 128, 00:38:23.409 "io_size": 4096, 00:38:23.409 "runtime": 10.006015, 00:38:23.409 "iops": 15250.426868238754, 00:38:23.409 "mibps": 59.57197995405763, 00:38:23.409 "io_failed": 0, 00:38:23.409 "io_timeout": 0, 00:38:23.409 "avg_latency_us": 8388.689504603295, 00:38:23.409 "min_latency_us": 4320.521481481482, 00:38:23.409 "max_latency_us": 18544.26074074074 00:38:23.409 } 00:38:23.409 ], 00:38:23.409 "core_count": 1 00:38:23.409 } 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 920741 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 920741 ']' 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 920741 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.409 08:12:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 920741 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 920741' 00:38:23.409 killing process with pid 920741 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 920741 00:38:23.409 Received shutdown signal, test time was about 10.000000 seconds 00:38:23.409 00:38:23.409 Latency(us) 00:38:23.409 [2024-11-18T07:12:16.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.409 [2024-11-18T07:12:16.497Z] =================================================================================================================== 00:38:23.409 [2024-11-18T07:12:16.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:23.409 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 920741 00:38:23.669 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:23.927 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:24.185 08:12:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:24.185 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:24.443 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:24.443 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:24.443 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 918044 00:38:24.443 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 918044 00:38:24.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 918044 Killed "${NVMF_APP[@]}" "$@" 00:38:24.443 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:24.443 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:24.443 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=922603 00:38:24.444 08:12:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 922603 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 922603 ']' 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:24.444 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.444 [2024-11-18 08:12:17.447848] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:24.444 [2024-11-18 08:12:17.448941] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:24.444 [2024-11-18 08:12:17.449012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.444 [2024-11-18 08:12:17.522442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.701 [2024-11-18 08:12:17.567436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.701 [2024-11-18 08:12:17.567506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.701 [2024-11-18 08:12:17.567536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.701 [2024-11-18 08:12:17.567547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.701 [2024-11-18 08:12:17.567557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:24.701 [2024-11-18 08:12:17.568163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.701 [2024-11-18 08:12:17.654507] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:24.701 [2024-11-18 08:12:17.654815] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:24.701 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:24.701 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:24.701 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:24.701 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:24.701 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.701 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:24.701 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:24.960 [2024-11-18 08:12:17.966774] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:24.960 [2024-11-18 08:12:17.966929] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:24.960 [2024-11-18 08:12:17.966978] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:24.960 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:24.960 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5b5a590e-b9af-40c3-96ad-ce63ee4555aa 00:38:24.960 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=5b5a590e-b9af-40c3-96ad-ce63ee4555aa 00:38:24.960 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:24.960 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:24.960 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:24.960 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:24.960 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:25.219 08:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5b5a590e-b9af-40c3-96ad-ce63ee4555aa -t 2000 00:38:25.479 [ 00:38:25.479 { 00:38:25.479 "name": "5b5a590e-b9af-40c3-96ad-ce63ee4555aa", 00:38:25.479 "aliases": [ 00:38:25.479 "lvs/lvol" 00:38:25.479 ], 00:38:25.479 "product_name": "Logical Volume", 00:38:25.479 "block_size": 4096, 00:38:25.479 "num_blocks": 38912, 00:38:25.479 "uuid": "5b5a590e-b9af-40c3-96ad-ce63ee4555aa", 00:38:25.479 "assigned_rate_limits": { 00:38:25.479 "rw_ios_per_sec": 0, 00:38:25.479 "rw_mbytes_per_sec": 0, 00:38:25.479 "r_mbytes_per_sec": 0, 00:38:25.479 "w_mbytes_per_sec": 0 00:38:25.479 }, 00:38:25.479 "claimed": false, 00:38:25.479 "zoned": false, 00:38:25.479 "supported_io_types": { 00:38:25.479 "read": true, 00:38:25.479 "write": true, 00:38:25.479 "unmap": true, 00:38:25.479 "flush": false, 00:38:25.479 "reset": true, 00:38:25.479 "nvme_admin": false, 00:38:25.479 "nvme_io": false, 00:38:25.479 "nvme_io_md": false, 00:38:25.479 "write_zeroes": true, 
00:38:25.479 "zcopy": false, 00:38:25.479 "get_zone_info": false, 00:38:25.479 "zone_management": false, 00:38:25.479 "zone_append": false, 00:38:25.479 "compare": false, 00:38:25.479 "compare_and_write": false, 00:38:25.479 "abort": false, 00:38:25.479 "seek_hole": true, 00:38:25.479 "seek_data": true, 00:38:25.479 "copy": false, 00:38:25.479 "nvme_iov_md": false 00:38:25.479 }, 00:38:25.479 "driver_specific": { 00:38:25.479 "lvol": { 00:38:25.479 "lvol_store_uuid": "ec510cae-8cc2-457a-8ca8-6cfa311ae122", 00:38:25.479 "base_bdev": "aio_bdev", 00:38:25.479 "thin_provision": false, 00:38:25.479 "num_allocated_clusters": 38, 00:38:25.479 "snapshot": false, 00:38:25.479 "clone": false, 00:38:25.479 "esnap_clone": false 00:38:25.479 } 00:38:25.479 } 00:38:25.479 } 00:38:25.479 ] 00:38:25.479 08:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:25.479 08:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:25.479 08:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:25.738 08:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:25.738 08:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:25.738 08:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:25.998 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:25.998 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:26.265 [2024-11-18 08:12:19.332669] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:26.528 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:26.787 request: 00:38:26.787 { 00:38:26.787 "uuid": "ec510cae-8cc2-457a-8ca8-6cfa311ae122", 00:38:26.787 "method": "bdev_lvol_get_lvstores", 00:38:26.787 "req_id": 1 00:38:26.787 } 00:38:26.787 Got JSON-RPC error response 00:38:26.787 response: 00:38:26.787 { 00:38:26.787 "code": -19, 00:38:26.787 "message": "No such device" 00:38:26.787 } 00:38:26.787 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:26.787 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:26.787 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:26.787 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:26.787 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:27.045 aio_bdev 00:38:27.045 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5b5a590e-b9af-40c3-96ad-ce63ee4555aa 00:38:27.045 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5b5a590e-b9af-40c3-96ad-ce63ee4555aa 00:38:27.045 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:27.045 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:27.045 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:27.045 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:27.045 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:27.304 08:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5b5a590e-b9af-40c3-96ad-ce63ee4555aa -t 2000 00:38:27.564 [ 00:38:27.564 { 00:38:27.564 "name": "5b5a590e-b9af-40c3-96ad-ce63ee4555aa", 00:38:27.564 "aliases": [ 00:38:27.564 "lvs/lvol" 00:38:27.564 ], 00:38:27.564 "product_name": "Logical Volume", 00:38:27.564 "block_size": 4096, 00:38:27.564 "num_blocks": 38912, 00:38:27.564 "uuid": "5b5a590e-b9af-40c3-96ad-ce63ee4555aa", 00:38:27.564 "assigned_rate_limits": { 00:38:27.564 "rw_ios_per_sec": 0, 00:38:27.564 "rw_mbytes_per_sec": 0, 00:38:27.564 
"r_mbytes_per_sec": 0, 00:38:27.564 "w_mbytes_per_sec": 0 00:38:27.564 }, 00:38:27.564 "claimed": false, 00:38:27.564 "zoned": false, 00:38:27.564 "supported_io_types": { 00:38:27.564 "read": true, 00:38:27.564 "write": true, 00:38:27.564 "unmap": true, 00:38:27.564 "flush": false, 00:38:27.564 "reset": true, 00:38:27.564 "nvme_admin": false, 00:38:27.564 "nvme_io": false, 00:38:27.564 "nvme_io_md": false, 00:38:27.564 "write_zeroes": true, 00:38:27.564 "zcopy": false, 00:38:27.564 "get_zone_info": false, 00:38:27.564 "zone_management": false, 00:38:27.564 "zone_append": false, 00:38:27.564 "compare": false, 00:38:27.564 "compare_and_write": false, 00:38:27.564 "abort": false, 00:38:27.564 "seek_hole": true, 00:38:27.564 "seek_data": true, 00:38:27.564 "copy": false, 00:38:27.564 "nvme_iov_md": false 00:38:27.564 }, 00:38:27.564 "driver_specific": { 00:38:27.564 "lvol": { 00:38:27.565 "lvol_store_uuid": "ec510cae-8cc2-457a-8ca8-6cfa311ae122", 00:38:27.565 "base_bdev": "aio_bdev", 00:38:27.565 "thin_provision": false, 00:38:27.565 "num_allocated_clusters": 38, 00:38:27.565 "snapshot": false, 00:38:27.565 "clone": false, 00:38:27.565 "esnap_clone": false 00:38:27.565 } 00:38:27.565 } 00:38:27.565 } 00:38:27.565 ] 00:38:27.565 08:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:27.565 08:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:27.565 08:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:27.825 08:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:27.825 08:12:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:27.825 08:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:28.085 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:28.085 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5b5a590e-b9af-40c3-96ad-ce63ee4555aa 00:38:28.345 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec510cae-8cc2-457a-8ca8-6cfa311ae122 00:38:28.603 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:28.862 00:38:28.862 real 0m19.505s 00:38:28.862 user 0m36.029s 00:38:28.862 sys 0m4.896s 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:28.862 ************************************ 00:38:28.862 END TEST lvs_grow_dirty 00:38:28.862 ************************************ 
00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:28.862 nvmf_trace.0 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:28.862 08:12:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:28.862 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:28.862 rmmod nvme_tcp 00:38:28.862 rmmod nvme_fabrics 00:38:29.121 rmmod nvme_keyring 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 922603 ']' 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 922603 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 922603 ']' 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 922603 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:29.121 08:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 922603 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:29.121 08:12:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 922603' 00:38:29.121 killing process with pid 922603 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 922603 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 922603 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:29.121 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:29.382 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:29.382 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:29.382 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.382 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.382 08:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.292 08:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:31.292 00:38:31.292 real 0m42.628s 00:38:31.292 user 0m55.067s 00:38:31.292 sys 0m8.689s 00:38:31.292 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:31.292 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:31.292 ************************************ 00:38:31.292 END TEST nvmf_lvs_grow 00:38:31.292 ************************************ 00:38:31.292 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:31.292 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:31.293 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:31.293 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:31.293 ************************************ 00:38:31.293 START TEST nvmf_bdev_io_wait 00:38:31.293 ************************************ 00:38:31.293 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:31.293 * Looking for test storage... 
00:38:31.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:31.293 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:31.293 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:31.293 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.553 --rc genhtml_branch_coverage=1 00:38:31.553 --rc genhtml_function_coverage=1 00:38:31.553 --rc genhtml_legend=1 00:38:31.553 --rc geninfo_all_blocks=1 00:38:31.553 --rc geninfo_unexecuted_blocks=1 00:38:31.553 00:38:31.553 ' 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.553 --rc genhtml_branch_coverage=1 00:38:31.553 --rc genhtml_function_coverage=1 00:38:31.553 --rc genhtml_legend=1 00:38:31.553 --rc geninfo_all_blocks=1 00:38:31.553 --rc geninfo_unexecuted_blocks=1 00:38:31.553 00:38:31.553 ' 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.553 --rc genhtml_branch_coverage=1 00:38:31.553 --rc genhtml_function_coverage=1 00:38:31.553 --rc genhtml_legend=1 00:38:31.553 --rc geninfo_all_blocks=1 00:38:31.553 --rc geninfo_unexecuted_blocks=1 00:38:31.553 00:38:31.553 ' 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.553 --rc genhtml_branch_coverage=1 00:38:31.553 --rc genhtml_function_coverage=1 
00:38:31.553 --rc genhtml_legend=1 00:38:31.553 --rc geninfo_all_blocks=1 00:38:31.553 --rc geninfo_unexecuted_blocks=1 00:38:31.553 00:38:31.553 ' 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:31.553 08:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.553 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.553 08:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:31.554 08:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:31.554 08:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:31.554 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:34.091 08:12:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:34.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.091 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:34.092 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:34.092 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:34.092 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:34.092 08:12:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:34.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:34.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:38:34.092 00:38:34.092 --- 10.0.0.2 ping statistics --- 00:38:34.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.092 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:34.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:34.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:38:34.092 00:38:34.092 --- 10.0.0.1 ping statistics --- 00:38:34.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.092 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:34.092 08:12:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=925200 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 925200 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 925200 ']' 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:34.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:34.092 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.092 [2024-11-18 08:12:26.861558] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:34.092 [2024-11-18 08:12:26.862649] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:34.093 [2024-11-18 08:12:26.862709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:34.093 [2024-11-18 08:12:26.933334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:34.093 [2024-11-18 08:12:26.978975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:34.093 [2024-11-18 08:12:26.979035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:34.093 [2024-11-18 08:12:26.979049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:34.093 [2024-11-18 08:12:26.979060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:34.093 [2024-11-18 08:12:26.979069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:34.093 [2024-11-18 08:12:26.980546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.093 [2024-11-18 08:12:26.980610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:34.093 [2024-11-18 08:12:26.980678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:34.093 [2024-11-18 08:12:26.980681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.093 [2024-11-18 08:12:26.981135] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.093 08:12:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.093 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.093 [2024-11-18 08:12:27.175321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:34.093 [2024-11-18 08:12:27.175529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:34.093 [2024-11-18 08:12:27.176410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:34.093 [2024-11-18 08:12:27.177322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.352 [2024-11-18 08:12:27.189363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.352 Malloc0 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.352 08:12:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.352 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.353 [2024-11-18 08:12:27.249517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=925264 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=925266 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:34.353 08:12:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=925268 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:34.353 { 00:38:34.353 "params": { 00:38:34.353 "name": "Nvme$subsystem", 00:38:34.353 "trtype": "$TEST_TRANSPORT", 00:38:34.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.353 "adrfam": "ipv4", 00:38:34.353 "trsvcid": "$NVMF_PORT", 00:38:34.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.353 "hdgst": ${hdgst:-false}, 00:38:34.353 "ddgst": ${ddgst:-false} 00:38:34.353 }, 00:38:34.353 "method": "bdev_nvme_attach_controller" 00:38:34.353 } 00:38:34.353 EOF 00:38:34.353 )") 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:34.353 08:12:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:34.353 { 00:38:34.353 "params": { 00:38:34.353 "name": "Nvme$subsystem", 00:38:34.353 "trtype": "$TEST_TRANSPORT", 00:38:34.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.353 "adrfam": "ipv4", 00:38:34.353 "trsvcid": "$NVMF_PORT", 00:38:34.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.353 "hdgst": ${hdgst:-false}, 00:38:34.353 "ddgst": ${ddgst:-false} 00:38:34.353 }, 00:38:34.353 "method": "bdev_nvme_attach_controller" 00:38:34.353 } 00:38:34.353 EOF 00:38:34.353 )") 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=925270 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:34.353 08:12:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:34.353 { 00:38:34.353 "params": { 00:38:34.353 "name": "Nvme$subsystem", 00:38:34.353 "trtype": "$TEST_TRANSPORT", 00:38:34.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.353 "adrfam": "ipv4", 00:38:34.353 "trsvcid": "$NVMF_PORT", 00:38:34.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.353 "hdgst": ${hdgst:-false}, 00:38:34.353 "ddgst": ${ddgst:-false} 00:38:34.353 }, 00:38:34.353 "method": "bdev_nvme_attach_controller" 00:38:34.353 } 00:38:34.353 EOF 00:38:34.353 )") 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:34.353 { 00:38:34.353 "params": { 00:38:34.353 "name": "Nvme$subsystem", 00:38:34.353 "trtype": "$TEST_TRANSPORT", 00:38:34.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.353 "adrfam": "ipv4", 00:38:34.353 "trsvcid": "$NVMF_PORT", 00:38:34.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.353 "hdgst": ${hdgst:-false}, 00:38:34.353 "ddgst": ${ddgst:-false} 00:38:34.353 }, 00:38:34.353 "method": "bdev_nvme_attach_controller" 00:38:34.353 } 00:38:34.353 EOF 00:38:34.353 )") 00:38:34.353 
08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 925264 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:34.353 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:34.353 "params": { 00:38:34.353 "name": "Nvme1", 00:38:34.353 "trtype": "tcp", 00:38:34.353 "traddr": "10.0.0.2", 00:38:34.354 "adrfam": "ipv4", 00:38:34.354 "trsvcid": "4420", 00:38:34.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.354 "hdgst": false, 00:38:34.354 "ddgst": false 00:38:34.354 }, 00:38:34.354 "method": "bdev_nvme_attach_controller" 00:38:34.354 }' 00:38:34.354 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:34.354 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:34.354 "params": { 00:38:34.354 "name": "Nvme1", 00:38:34.354 "trtype": "tcp", 00:38:34.354 "traddr": "10.0.0.2", 00:38:34.354 "adrfam": "ipv4", 00:38:34.354 "trsvcid": "4420", 
00:38:34.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.354 "hdgst": false, 00:38:34.354 "ddgst": false 00:38:34.354 }, 00:38:34.354 "method": "bdev_nvme_attach_controller" 00:38:34.354 }' 00:38:34.354 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:34.354 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:34.354 "params": { 00:38:34.354 "name": "Nvme1", 00:38:34.354 "trtype": "tcp", 00:38:34.354 "traddr": "10.0.0.2", 00:38:34.354 "adrfam": "ipv4", 00:38:34.354 "trsvcid": "4420", 00:38:34.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.354 "hdgst": false, 00:38:34.354 "ddgst": false 00:38:34.354 }, 00:38:34.354 "method": "bdev_nvme_attach_controller" 00:38:34.354 }' 00:38:34.354 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:34.354 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:34.354 "params": { 00:38:34.354 "name": "Nvme1", 00:38:34.354 "trtype": "tcp", 00:38:34.354 "traddr": "10.0.0.2", 00:38:34.354 "adrfam": "ipv4", 00:38:34.354 "trsvcid": "4420", 00:38:34.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.354 "hdgst": false, 00:38:34.354 "ddgst": false 00:38:34.354 }, 00:38:34.354 "method": "bdev_nvme_attach_controller" 00:38:34.354 }' 00:38:34.354 [2024-11-18 08:12:27.302587] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:34.354 [2024-11-18 08:12:27.302665] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:34.354 [2024-11-18 08:12:27.302693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:34.354 [2024-11-18 08:12:27.302692] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:34.354 [2024-11-18 08:12:27.302693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:34.354 [2024-11-18 08:12:27.302792] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:34.354 [2024-11-18 08:12:27.302791] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:34.354 [2024-11-18 08:12:27.302793] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:34.614 [2024-11-18 08:12:27.488402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.614 [2024-11-18 08:12:27.530158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:34.614 [2024-11-18 08:12:27.586716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.614 [2024-11-18 08:12:27.628327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:34.614 [2024-11-18 08:12:27.683403] app.c: 919:spdk_app_start: 
*NOTICE*: Total cores available: 1 00:38:34.873 [2024-11-18 08:12:27.727888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:34.873 [2024-11-18 08:12:27.757948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.873 [2024-11-18 08:12:27.796744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:34.873 Running I/O for 1 seconds... 00:38:34.873 Running I/O for 1 seconds... 00:38:35.131 Running I/O for 1 seconds... 00:38:35.131 Running I/O for 1 seconds... 00:38:36.065 193688.00 IOPS, 756.59 MiB/s 00:38:36.066 Latency(us) 00:38:36.066 [2024-11-18T07:12:29.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.066 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:36.066 Nvme1n1 : 1.00 193325.01 755.18 0.00 0.00 658.55 286.72 1868.99 00:38:36.066 [2024-11-18T07:12:29.154Z] =================================================================================================================== 00:38:36.066 [2024-11-18T07:12:29.154Z] Total : 193325.01 755.18 0.00 0.00 658.55 286.72 1868.99 00:38:36.066 6765.00 IOPS, 26.43 MiB/s 00:38:36.066 Latency(us) 00:38:36.066 [2024-11-18T07:12:29.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.066 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:36.066 Nvme1n1 : 1.02 6767.42 26.44 0.00 0.00 18718.74 4247.70 29321.29 00:38:36.066 [2024-11-18T07:12:29.154Z] =================================================================================================================== 00:38:36.066 [2024-11-18T07:12:29.154Z] Total : 6767.42 26.44 0.00 0.00 18718.74 4247.70 29321.29 00:38:36.066 8889.00 IOPS, 34.72 MiB/s 00:38:36.066 Latency(us) 00:38:36.066 [2024-11-18T07:12:29.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.066 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:36.066 Nvme1n1 : 1.01 
8943.68 34.94 0.00 0.00 14244.04 6043.88 19806.44 00:38:36.066 [2024-11-18T07:12:29.154Z] =================================================================================================================== 00:38:36.066 [2024-11-18T07:12:29.154Z] Total : 8943.68 34.94 0.00 0.00 14244.04 6043.88 19806.44 00:38:36.066 6300.00 IOPS, 24.61 MiB/s 00:38:36.066 Latency(us) 00:38:36.066 [2024-11-18T07:12:29.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.066 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:36.066 Nvme1n1 : 1.01 6390.96 24.96 0.00 0.00 19962.99 4927.34 39807.05 00:38:36.066 [2024-11-18T07:12:29.154Z] =================================================================================================================== 00:38:36.066 [2024-11-18T07:12:29.154Z] Total : 6390.96 24.96 0.00 0.00 19962.99 4927.34 39807.05 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 925266 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 925268 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 925270 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:36.325 
08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:36.325 rmmod nvme_tcp 00:38:36.325 rmmod nvme_fabrics 00:38:36.325 rmmod nvme_keyring 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 925200 ']' 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 925200 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 925200 ']' 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 925200 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:36.325 08:12:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 925200 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:36.325 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 925200' 00:38:36.325 killing process with pid 925200 00:38:36.326 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 925200 00:38:36.326 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 925200 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:36.585 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.515 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:38.515 00:38:38.515 real 0m7.249s 00:38:38.515 user 0m14.119s 00:38:38.515 sys 0m3.913s 00:38:38.515 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.515 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.515 ************************************ 00:38:38.515 END TEST nvmf_bdev_io_wait 00:38:38.515 ************************************ 00:38:38.515 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:38.515 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:38.515 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:38.515 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:38.773 ************************************ 00:38:38.773 START TEST nvmf_queue_depth 00:38:38.773 ************************************ 00:38:38.773 08:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:38.773 * Looking for test storage... 00:38:38.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # 
ver1_l=2 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 
00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:38.773 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:38.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.774 --rc genhtml_branch_coverage=1 00:38:38.774 --rc genhtml_function_coverage=1 00:38:38.774 --rc genhtml_legend=1 00:38:38.774 --rc geninfo_all_blocks=1 00:38:38.774 --rc geninfo_unexecuted_blocks=1 00:38:38.774 00:38:38.774 ' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:38.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.774 --rc genhtml_branch_coverage=1 00:38:38.774 --rc genhtml_function_coverage=1 00:38:38.774 --rc genhtml_legend=1 00:38:38.774 --rc geninfo_all_blocks=1 00:38:38.774 --rc geninfo_unexecuted_blocks=1 00:38:38.774 00:38:38.774 ' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:38.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.774 --rc genhtml_branch_coverage=1 00:38:38.774 --rc genhtml_function_coverage=1 00:38:38.774 --rc genhtml_legend=1 00:38:38.774 --rc geninfo_all_blocks=1 00:38:38.774 --rc geninfo_unexecuted_blocks=1 00:38:38.774 00:38:38.774 ' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:38.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.774 --rc genhtml_branch_coverage=1 00:38:38.774 --rc genhtml_function_coverage=1 00:38:38.774 --rc genhtml_legend=1 00:38:38.774 --rc geninfo_all_blocks=1 00:38:38.774 --rc geninfo_unexecuted_blocks=1 00:38:38.774 00:38:38.774 ' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:38.774 08:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.774 08:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:38.774 08:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:38.774 08:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:38.774 08:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:41.306 
08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:41.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.306 08:12:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:41.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:41.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:41.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:41.306 08:12:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:41.306 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:41.307 08:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:41.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:41.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:38:41.307 00:38:41.307 --- 10.0.0.2 ping statistics --- 00:38:41.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.307 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:41.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:41.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:38:41.307 00:38:41.307 --- 10.0.0.1 ping statistics --- 00:38:41.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.307 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:41.307 08:12:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=927487 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 927487 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 927487 ']' 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.307 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.307 [2024-11-18 08:12:34.256866] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:41.307 [2024-11-18 08:12:34.257954] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:41.307 [2024-11-18 08:12:34.258007] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.307 [2024-11-18 08:12:34.337127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.307 [2024-11-18 08:12:34.382808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:41.307 [2024-11-18 08:12:34.382856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.307 [2024-11-18 08:12:34.382877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.307 [2024-11-18 08:12:34.382893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.307 [2024-11-18 08:12:34.382906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:41.307 [2024-11-18 08:12:34.383441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.565 [2024-11-18 08:12:34.463294] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:41.565 [2024-11-18 08:12:34.463653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.565 [2024-11-18 08:12:34.520056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.565 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.566 Malloc0 00:38:41.566 08:12:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.566 [2024-11-18 08:12:34.580182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.566 
08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=927511 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 927511 /var/tmp/bdevperf.sock 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 927511 ']' 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:41.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.566 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.566 [2024-11-18 08:12:34.627882] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:41.566 [2024-11-18 08:12:34.627947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927511 ] 00:38:41.825 [2024-11-18 08:12:34.694575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.825 [2024-11-18 08:12:34.740219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.825 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:41.825 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:41.825 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:41.825 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.825 08:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:42.083 NVMe0n1 00:38:42.083 08:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.083 08:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:42.341 Running I/O for 10 seconds... 
00:38:44.216 8196.00 IOPS, 32.02 MiB/s [2024-11-18T07:12:38.249Z] 8532.00 IOPS, 33.33 MiB/s [2024-11-18T07:12:39.243Z] 8533.67 IOPS, 33.33 MiB/s [2024-11-18T07:12:40.617Z] 8620.50 IOPS, 33.67 MiB/s [2024-11-18T07:12:41.554Z] 8601.80 IOPS, 33.60 MiB/s [2024-11-18T07:12:42.488Z] 8640.00 IOPS, 33.75 MiB/s [2024-11-18T07:12:43.425Z] 8631.00 IOPS, 33.71 MiB/s [2024-11-18T07:12:44.363Z] 8683.25 IOPS, 33.92 MiB/s [2024-11-18T07:12:45.301Z] 8661.67 IOPS, 33.83 MiB/s [2024-11-18T07:12:45.561Z] 8691.90 IOPS, 33.95 MiB/s 00:38:52.473 Latency(us) 00:38:52.473 [2024-11-18T07:12:45.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:52.473 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:52.473 Verification LBA range: start 0x0 length 0x4000 00:38:52.473 NVMe0n1 : 10.10 8701.91 33.99 0.00 0.00 117175.17 21262.79 69516.71 00:38:52.473 [2024-11-18T07:12:45.561Z] =================================================================================================================== 00:38:52.473 [2024-11-18T07:12:45.561Z] Total : 8701.91 33.99 0.00 0.00 117175.17 21262.79 69516.71 00:38:52.473 { 00:38:52.473 "results": [ 00:38:52.473 { 00:38:52.473 "job": "NVMe0n1", 00:38:52.473 "core_mask": "0x1", 00:38:52.473 "workload": "verify", 00:38:52.473 "status": "finished", 00:38:52.473 "verify_range": { 00:38:52.473 "start": 0, 00:38:52.473 "length": 16384 00:38:52.473 }, 00:38:52.473 "queue_depth": 1024, 00:38:52.473 "io_size": 4096, 00:38:52.473 "runtime": 10.102724, 00:38:52.473 "iops": 8701.910494634913, 00:38:52.473 "mibps": 33.99183786966763, 00:38:52.473 "io_failed": 0, 00:38:52.473 "io_timeout": 0, 00:38:52.473 "avg_latency_us": 117175.16780473625, 00:38:52.473 "min_latency_us": 21262.79111111111, 00:38:52.473 "max_latency_us": 69516.70518518519 00:38:52.473 } 00:38:52.473 ], 00:38:52.473 "core_count": 1 00:38:52.473 } 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 927511 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 927511 ']' 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 927511 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 927511 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 927511' 00:38:52.473 killing process with pid 927511 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 927511 00:38:52.473 Received shutdown signal, test time was about 10.000000 seconds 00:38:52.473 00:38:52.473 Latency(us) 00:38:52.473 [2024-11-18T07:12:45.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:52.473 [2024-11-18T07:12:45.561Z] =================================================================================================================== 00:38:52.473 [2024-11-18T07:12:45.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:52.473 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 927511 00:38:52.734 08:12:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:52.734 rmmod nvme_tcp 00:38:52.734 rmmod nvme_fabrics 00:38:52.734 rmmod nvme_keyring 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 927487 ']' 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 927487 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 927487 ']' 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 927487 00:38:52.734 08:12:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 927487 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 927487' 00:38:52.734 killing process with pid 927487 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 927487 00:38:52.734 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 927487 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.994 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.529 08:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:55.529 00:38:55.529 real 0m16.397s 00:38:55.529 user 0m22.377s 00:38:55.529 sys 0m3.543s 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.529 ************************************ 00:38:55.529 END TEST nvmf_queue_depth 00:38:55.529 ************************************ 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:55.529 ************************************ 00:38:55.529 START 
TEST nvmf_target_multipath 00:38:55.529 ************************************ 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:55.529 * Looking for test storage... 00:38:55.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:55.529 08:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:55.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.529 --rc genhtml_branch_coverage=1 00:38:55.529 --rc genhtml_function_coverage=1 00:38:55.529 --rc genhtml_legend=1 00:38:55.529 --rc geninfo_all_blocks=1 00:38:55.529 --rc geninfo_unexecuted_blocks=1 00:38:55.529 00:38:55.529 ' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:55.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.529 --rc genhtml_branch_coverage=1 00:38:55.529 --rc genhtml_function_coverage=1 00:38:55.529 --rc genhtml_legend=1 00:38:55.529 --rc geninfo_all_blocks=1 00:38:55.529 --rc geninfo_unexecuted_blocks=1 00:38:55.529 00:38:55.529 ' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:55.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.529 --rc genhtml_branch_coverage=1 00:38:55.529 --rc genhtml_function_coverage=1 00:38:55.529 --rc genhtml_legend=1 00:38:55.529 --rc geninfo_all_blocks=1 00:38:55.529 --rc geninfo_unexecuted_blocks=1 00:38:55.529 00:38:55.529 ' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:55.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.529 --rc genhtml_branch_coverage=1 00:38:55.529 --rc genhtml_function_coverage=1 00:38:55.529 --rc genhtml_legend=1 00:38:55.529 --rc geninfo_all_blocks=1 00:38:55.529 --rc geninfo_unexecuted_blocks=1 00:38:55.529 00:38:55.529 ' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:55.529 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:55.530 08:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.530 08:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:55.530 08:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:57.433 08:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:57.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:57.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:57.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.433 08:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:57.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:57.433 08:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:57.433 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:57.434 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:57.434 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:57.434 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:57.434 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:57.434 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:57.434 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:57.434 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:57.434 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:57.692 08:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:57.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:57.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:38:57.692 00:38:57.692 --- 10.0.0.2 ping statistics --- 00:38:57.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.692 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:57.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:57.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:38:57.692 00:38:57.692 --- 10.0.0.1 ping statistics --- 00:38:57.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.692 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:57.692 only one NIC for nvmf test 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:57.692 08:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:57.692 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:57.693 rmmod nvme_tcp 00:38:57.693 rmmod nvme_fabrics 00:38:57.693 rmmod nvme_keyring 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:57.693 08:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:57.693 08:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:59.595 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.854 
08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:59.854 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:59.855 00:38:59.855 real 0m4.639s 00:38:59.855 user 0m0.909s 00:38:59.855 sys 0m1.739s 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:59.855 ************************************ 00:38:59.855 END TEST nvmf_target_multipath 00:38:59.855 ************************************ 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:59.855 ************************************ 00:38:59.855 START TEST nvmf_zcopy 00:38:59.855 ************************************ 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:59.855 * Looking for test storage... 
00:38:59.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:59.855 08:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:59.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.855 --rc genhtml_branch_coverage=1 00:38:59.855 --rc genhtml_function_coverage=1 00:38:59.855 --rc genhtml_legend=1 00:38:59.855 --rc geninfo_all_blocks=1 00:38:59.855 --rc geninfo_unexecuted_blocks=1 00:38:59.855 00:38:59.855 ' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:59.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.855 --rc genhtml_branch_coverage=1 00:38:59.855 --rc genhtml_function_coverage=1 00:38:59.855 --rc genhtml_legend=1 00:38:59.855 --rc geninfo_all_blocks=1 00:38:59.855 --rc geninfo_unexecuted_blocks=1 00:38:59.855 00:38:59.855 ' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:59.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.855 --rc genhtml_branch_coverage=1 00:38:59.855 --rc genhtml_function_coverage=1 00:38:59.855 --rc genhtml_legend=1 00:38:59.855 --rc geninfo_all_blocks=1 00:38:59.855 --rc geninfo_unexecuted_blocks=1 00:38:59.855 00:38:59.855 ' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:59.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.855 --rc genhtml_branch_coverage=1 00:38:59.855 --rc genhtml_function_coverage=1 00:38:59.855 --rc genhtml_legend=1 00:38:59.855 --rc geninfo_all_blocks=1 00:38:59.855 --rc geninfo_unexecuted_blocks=1 00:38:59.855 00:38:59.855 ' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:59.855 08:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:59.855 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:59.856 08:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:59.856 08:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:02.388 
08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:02.388 08:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:02.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:02.389 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:02.389 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:02.389 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:02.389 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:02.389 08:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:02.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:02.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:39:02.389 00:39:02.389 --- 10.0.0.2 ping statistics --- 00:39:02.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.389 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:02.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:02.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:39:02.389 00:39:02.389 --- 10.0.0.1 ping statistics --- 00:39:02.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.389 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=932687 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 932687 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 932687 ']' 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:02.389 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.390 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:02.390 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.390 [2024-11-18 08:12:55.348866] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:02.390 [2024-11-18 08:12:55.350053] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:39:02.390 [2024-11-18 08:12:55.350123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:02.390 [2024-11-18 08:12:55.423901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.390 [2024-11-18 08:12:55.471042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:02.390 [2024-11-18 08:12:55.471109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:02.390 [2024-11-18 08:12:55.471132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:02.390 [2024-11-18 08:12:55.471150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:02.390 [2024-11-18 08:12:55.471164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:02.390 [2024-11-18 08:12:55.471743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.648 [2024-11-18 08:12:55.555440] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:02.648 [2024-11-18 08:12:55.555826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
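The target above is launched inside the namespace because `nvmf/common.sh@293` prepends the `NVMF_TARGET_NS_CMD` array to `NVMF_APP` before starting it. A minimal runnable sketch of that array-prefix composition (the binary path here is illustrative; the real test uses the Jenkins workspace path, and actually executing the command needs root and an existing namespace):

```shell
#!/usr/bin/env bash
# Sketch: compose "ip netns exec <ns> <app> <args>" the way
# nvmf/common.sh builds NVMF_APP from NVMF_TARGET_NS_CMD.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2)

# Prefix the app command with the namespace wrapper; print it rather
# than run it, since running nvmf_tgt requires root and hugepages.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
printf '%s\n' "${NVMF_APP[*]}"
# → ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
```

This keeps every later invocation (`"${NVMF_APP[@]}"`) transparently namespaced without the callers needing to know about the namespace.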
00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.648 [2024-11-18 08:12:55.612344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.648 
08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:02.648 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.649 [2024-11-18 08:12:55.628548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.649 malloc0 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:02.649 { 00:39:02.649 "params": { 00:39:02.649 "name": "Nvme$subsystem", 00:39:02.649 "trtype": "$TEST_TRANSPORT", 00:39:02.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:02.649 "adrfam": "ipv4", 00:39:02.649 "trsvcid": "$NVMF_PORT", 00:39:02.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:02.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:02.649 "hdgst": ${hdgst:-false}, 00:39:02.649 "ddgst": ${ddgst:-false} 00:39:02.649 }, 00:39:02.649 "method": "bdev_nvme_attach_controller" 00:39:02.649 } 00:39:02.649 EOF 00:39:02.649 )") 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:02.649 08:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:02.649 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:02.649 "params": { 00:39:02.649 "name": "Nvme1", 00:39:02.649 "trtype": "tcp", 00:39:02.649 "traddr": "10.0.0.2", 00:39:02.649 "adrfam": "ipv4", 00:39:02.649 "trsvcid": "4420", 00:39:02.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:02.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:02.649 "hdgst": false, 00:39:02.649 "ddgst": false 00:39:02.649 }, 00:39:02.649 "method": "bdev_nvme_attach_controller" 00:39:02.649 }' 00:39:02.649 [2024-11-18 08:12:55.710426] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:39:02.649 [2024-11-18 08:12:55.710527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932833 ] 00:39:02.907 [2024-11-18 08:12:55.777976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.907 [2024-11-18 08:12:55.823076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.907 Running I/O for 10 seconds... 
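The JSON fed to bdevperf on `/dev/fd/62` above is built by `gen_nvmf_target_json`: a heredoc per subsystem whose shell variables are expanded, then joined through `jq`. A rough stand-alone sketch of that substitution step for one subsystem (the `gen_json` helper name is mine, and the variable values are the defaults visible in the log, not read from a live environment):

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-substitution pattern gen_nvmf_target_json uses:
# shell expansion fills in the transport/address fields, producing the
# "bdev_nvme_attach_controller" params block handed to bdevperf.
gen_json() {
  local subsystem=$1
  local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_json 1
```

With `hdgst`/`ddgst` unset, the `${hdgst:-false}` defaults produce exactly the `"hdgst": false, "ddgst": false` pair seen in the expanded config above.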
00:39:05.222 5683.00 IOPS, 44.40 MiB/s [2024-11-18T07:12:59.247Z] 5740.50 IOPS, 44.85 MiB/s [2024-11-18T07:13:00.183Z] 5749.67 IOPS, 44.92 MiB/s [2024-11-18T07:13:01.118Z] 5753.50 IOPS, 44.95 MiB/s [2024-11-18T07:13:02.056Z] 5763.60 IOPS, 45.03 MiB/s [2024-11-18T07:13:03.431Z] 5759.83 IOPS, 45.00 MiB/s [2024-11-18T07:13:04.369Z] 5756.29 IOPS, 44.97 MiB/s [2024-11-18T07:13:05.307Z] 5751.12 IOPS, 44.93 MiB/s [2024-11-18T07:13:06.244Z] 5747.78 IOPS, 44.90 MiB/s [2024-11-18T07:13:06.244Z] 5750.50 IOPS, 44.93 MiB/s 00:39:13.156 Latency(us) 00:39:13.156 [2024-11-18T07:13:06.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.156 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:13.156 Verification LBA range: start 0x0 length 0x1000 00:39:13.156 Nvme1n1 : 10.02 5752.04 44.94 0.00 0.00 22192.37 2500.08 30486.38 00:39:13.156 [2024-11-18T07:13:06.244Z] =================================================================================================================== 00:39:13.156 [2024-11-18T07:13:06.244Z] Total : 5752.04 44.94 0.00 0.00 22192.37 2500.08 30486.38 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=934008 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:13.156 08:13:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:13.156 { 00:39:13.156 "params": { 00:39:13.156 "name": "Nvme$subsystem", 00:39:13.156 "trtype": "$TEST_TRANSPORT", 00:39:13.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:13.156 "adrfam": "ipv4", 00:39:13.156 "trsvcid": "$NVMF_PORT", 00:39:13.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:13.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:13.156 "hdgst": ${hdgst:-false}, 00:39:13.156 "ddgst": ${ddgst:-false} 00:39:13.156 }, 00:39:13.156 "method": "bdev_nvme_attach_controller" 00:39:13.156 } 00:39:13.156 EOF 00:39:13.156 )") 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:13.156 [2024-11-18 08:13:06.240270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.156 [2024-11-18 08:13:06.240320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:13.156 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:13.156 "params": { 00:39:13.156 "name": "Nvme1", 00:39:13.156 "trtype": "tcp", 00:39:13.156 "traddr": "10.0.0.2", 00:39:13.156 "adrfam": "ipv4", 00:39:13.156 "trsvcid": "4420", 00:39:13.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:13.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:13.156 "hdgst": false, 00:39:13.156 "ddgst": false 00:39:13.156 }, 00:39:13.156 "method": "bdev_nvme_attach_controller" 00:39:13.156 }' 00:39:13.415 [2024-11-18 08:13:06.248218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.415 [2024-11-18 08:13:06.248258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.415 [2024-11-18 08:13:06.256208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.415 [2024-11-18 08:13:06.256230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.415 [2024-11-18 08:13:06.264208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.415 [2024-11-18 08:13:06.264230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.415 [2024-11-18 08:13:06.272207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.415 [2024-11-18 08:13:06.272228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.415 [2024-11-18 08:13:06.279670] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:39:13.415 [2024-11-18 08:13:06.279729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934008 ] 00:39:13.415 [2024-11-18 08:13:06.280221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.415 [2024-11-18 08:13:06.280244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.415 [2024-11-18 08:13:06.288208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.415 [2024-11-18 08:13:06.288230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.415 [2024-11-18 08:13:06.296205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.415 [2024-11-18 08:13:06.296226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.415 [2024-11-18 08:13:06.304205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.415 [2024-11-18 08:13:06.304226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.415 [2024-11-18 08:13:06.312206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.312227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.320204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.320225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.328205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.328225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:39:13.416 [2024-11-18 08:13:06.336204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.336225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.344204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.344225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.349037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.416 [2024-11-18 08:13:06.352209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.352237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.360244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.360281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.368213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.368238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.376205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.376226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.384205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.384226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.392209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.392232] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.396275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.416 [2024-11-18 08:13:06.400211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.400233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.408205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.408226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.416233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.416265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.424234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.424269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.432236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.432274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.440240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.440278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.448241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.448278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.456236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:39:13.416 [2024-11-18 08:13:06.456271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.464207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.464230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.472237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.472272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.480234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.480270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.488214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.488238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.416 [2024-11-18 08:13:06.496217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.416 [2024-11-18 08:13:06.496245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.504213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.504238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.512213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.512238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.520209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 
08:13:06.520232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.528337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.528364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.536210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.536232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.544278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.544304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.552208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.552231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.560222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.560246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.568210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.568232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 Running I/O for 5 seconds... 
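The steady cadence of `Requested NSID 1 already in use` / `Unable to add namespace` pairs during the 5-second run is the test repeatedly issuing `nvmf_subsystem_add_ns` against a namespace ID that is already attached, and tolerating the expected failure while I/O continues. A generic sketch of that tolerate-the-failure loop, with `rpc_cmd` stubbed out so it runs without a live SPDK target (a real run would call `scripts/rpc.py` instead):

```shell
#!/usr/bin/env bash
# Sketch: keep issuing an RPC that is expected to fail, counting the
# failures instead of aborting, as the zcopy test does while bdevperf
# drives I/O against the same subsystem.
rpc_cmd() {
  # Stand-in for scripts/rpc.py; against a live target this returns
  # non-zero because NSID 1 is already in use on cnode1.
  echo "Unable to add namespace" >&2
  return 1
}

attempts=0
for _ in 1 2 3; do
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
    2>/dev/null || attempts=$((attempts + 1))
done
echo "failed attempts: $attempts"
# → failed attempts: 3
```

The `|| attempts=$((attempts + 1))` keeps the loop alive under `set -e`-style strictness, which is why the log shows the same error pair over and over rather than a test abort.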
00:39:13.675 [2024-11-18 08:13:06.584455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.584482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.595124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.595164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.608330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.608358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.617800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.617828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.629833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.629859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.646585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.646613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.662741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.662785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.678507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.678536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 [2024-11-18 08:13:06.696605] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.675 [2024-11-18 08:13:06.696633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.675 
11733.00 IOPS, 91.66 MiB/s [2024-11-18T07:13:07.804Z] 
11741.00 IOPS, 91.73 MiB/s [2024-11-18T07:13:08.634Z] [2024-11-18 08:13:08.759835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.759863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:15.807 [2024-11-18 08:13:08.769530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.769572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.781000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.781027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.791641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.791669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.805065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.805093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.814194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.814219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.825976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.826016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.837000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.837041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.847689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.847716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.861451] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.861477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.871323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.871350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.807 [2024-11-18 08:13:08.885724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.807 [2024-11-18 08:13:08.885751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.067 [2024-11-18 08:13:08.895753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.067 [2024-11-18 08:13:08.895797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:08.907394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:08.907420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:08.922014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:08.922055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:08.931563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:08.931591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:08.943390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:08.943415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:08.958792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:08.958817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:08.976118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:08.976145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:08.985608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:08.985636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:08.997214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:08.997240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.007896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.007924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.018791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.018817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.034116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.034142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.043573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.043599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.054984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 
[2024-11-18 08:13:09.055027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.067298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.067336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.077268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.077293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.088992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.089018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.099467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.099515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.110166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.110191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.126267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.126306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.135671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.135699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.068 [2024-11-18 08:13:09.147347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.068 [2024-11-18 08:13:09.147372] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.158385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.158412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.173600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.173628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.182996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.183022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.197628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.197656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.208397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.208436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.219788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.219814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.232407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.232434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.242347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.242374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:16.328 [2024-11-18 08:13:09.254336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.254362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.270289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.270313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.288168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.288194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.298227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.298253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.310006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.310031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.325296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.325322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.334716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.334745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.348598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.348626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.358221] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.358248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.369960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.369985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.386176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.386202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.395998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.396026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.328 [2024-11-18 08:13:09.407853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.328 [2024-11-18 08:13:09.407895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.418479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.418515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.434107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.434156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.443865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.443890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.455719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.455759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.466392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.466417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.482581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.482608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.492376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.492401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.504137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.504161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.587 [2024-11-18 08:13:09.514805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.587 [2024-11-18 08:13:09.514829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.528852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.528880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.538422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.538449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.550442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 
[2024-11-18 08:13:09.550484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.567292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.567318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.577116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.577145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 11731.33 IOPS, 91.65 MiB/s [2024-11-18T07:13:09.676Z] [2024-11-18 08:13:09.589107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.589134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.599886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.599913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.610727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.610753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.626145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.626173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.635823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.635849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.647841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 
[2024-11-18 08:13:09.647870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.658523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.658580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.588 [2024-11-18 08:13:09.674298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.588 [2024-11-18 08:13:09.674344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.684351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.684378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.696392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.696420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.707484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.707536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.721503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.721530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.731155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.731181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.744941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.744980] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.754156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.754182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.770171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.770197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.779361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.779385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.791512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.791556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.806681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.806710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.816465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.816517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.828421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.828449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.839540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.839565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:16.847 [2024-11-18 08:13:09.850536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.850577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.865790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.865817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.875027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.875053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.889154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.889189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.898750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.898792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.913006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.913032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.847 [2024-11-18 08:13:09.922479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.847 [2024-11-18 08:13:09.922527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:09.937263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:09.937288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:09.947075] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:09.947100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:09.960711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:09.960738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:09.970024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:09.970051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:09.981800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:09.981825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:09.998322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:09.998362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:10.010864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:10.010908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:10.025446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:10.025497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:10.036627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:10.036653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107 [2024-11-18 08:13:10.048293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:17.107 [2024-11-18 08:13:10.048335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.107
[... the same subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" error pair repeats every ~10-15 ms from 08:13:10.059 through 08:13:10.585 ...]
11711.00 IOPS, 91.49 MiB/s [2024-11-18T07:13:10.713Z]
[... error pair repeats continue from 08:13:10.595 through 08:13:11.525 ...]
00:39:18.686 [2024-11-18 08:13:11.534933]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.686 [2024-11-18 08:13:11.534972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.686
[... error pair repeats from 08:13:11.548 through 08:13:11.583 ...]
11707.20 IOPS, 91.46 MiB/s [2024-11-18T07:13:11.774Z]
[2024-11-18 08:13:11.593366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.686 [2024-11-18 08:13:11.593393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.686
Latency(us)
[2024-11-18T07:13:11.774Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
Nvme1n1               :       5.01      11714.57  91.52  0.00    0.00  10913.75  2888.44  21359.88
[2024-11-18T07:13:11.774Z] ===================================================================================================================
Total                 :                 11714.57  91.52  0.00    0.00  10913.75  2888.44  21359.88 00:39:18.687
[2024-11-18 08:13:11.600381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.687 [2024-11-18 08:13:11.600407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.687
[... error pair repeats from 08:13:11.608 through 08:13:11.648 ...]
[2024-11-18 08:13:11.656260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.687 [2024-11-18 08:13:11.656310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:39:18.687
[... error pair repeats from 08:13:11.664 through 08:13:11.792 ...]
[2024-11-18 08:13:11.792203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.949 [2024-11-18 08:13:11.792223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.949
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (934008) - No such process 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 934008 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:18.949
delay0
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.949
08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:18.949
08:13:11
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.949 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:18.949 [2024-11-18 08:13:11.912928] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:27.078 [2024-11-18 08:13:18.985487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601940 is same with the state(6) to be set 00:39:27.078 Initializing NVMe Controllers 00:39:27.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:27.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:27.078 Initialization complete. Launching workers. 
00:39:27.078 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 219, failed: 25288 00:39:27.078 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 25357, failed to submit 150 00:39:27.078 success 25290, unsuccessful 67, failed 0 00:39:27.078 08:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:27.078 08:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:27.078 08:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:27.078 08:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:27.078 08:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:27.078 08:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:27.078 08:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:27.078 08:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:27.078 rmmod nvme_tcp 00:39:27.078 rmmod nvme_fabrics 00:39:27.078 rmmod nvme_keyring 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 932687 ']' 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 932687 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 932687 ']' 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 932687 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 932687 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 932687' 00:39:27.078 killing process with pid 932687 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 932687 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 932687 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:27.078 
08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:27.078 08:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:28.461 00:39:28.461 real 0m28.595s 00:39:28.461 user 0m40.170s 00:39:28.461 sys 0m10.156s 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:28.461 ************************************ 00:39:28.461 END TEST nvmf_zcopy 00:39:28.461 ************************************ 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:28.461 
************************************ 00:39:28.461 START TEST nvmf_nmic 00:39:28.461 ************************************ 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:28.461 * Looking for test storage... 00:39:28.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:28.461 08:13:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:28.461 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:28.462 08:13:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:28.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.462 --rc genhtml_branch_coverage=1 00:39:28.462 --rc genhtml_function_coverage=1 00:39:28.462 --rc genhtml_legend=1 00:39:28.462 --rc geninfo_all_blocks=1 00:39:28.462 --rc geninfo_unexecuted_blocks=1 00:39:28.462 00:39:28.462 ' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:28.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.462 --rc genhtml_branch_coverage=1 00:39:28.462 --rc genhtml_function_coverage=1 00:39:28.462 --rc genhtml_legend=1 00:39:28.462 --rc geninfo_all_blocks=1 00:39:28.462 --rc geninfo_unexecuted_blocks=1 00:39:28.462 00:39:28.462 ' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:28.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.462 --rc genhtml_branch_coverage=1 00:39:28.462 --rc genhtml_function_coverage=1 00:39:28.462 --rc genhtml_legend=1 00:39:28.462 --rc geninfo_all_blocks=1 00:39:28.462 --rc geninfo_unexecuted_blocks=1 00:39:28.462 00:39:28.462 ' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:28.462 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.462 --rc genhtml_branch_coverage=1 00:39:28.462 --rc genhtml_function_coverage=1 00:39:28.462 --rc genhtml_legend=1 00:39:28.462 --rc geninfo_all_blocks=1 00:39:28.462 --rc geninfo_unexecuted_blocks=1 00:39:28.462 00:39:28.462 ' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:28.462 08:13:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.462 08:13:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:39:28.462 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:28.721 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:28.722 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:28.722 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:28.722 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:28.722 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:28.722 08:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:30.627 08:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:30.627 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:30.627 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:30.627 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:30.627 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:30.627 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:30.627 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:30.627 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:30.627 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:30.628 08:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:30.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:30.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:30.628 08:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:30.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:30.628 08:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:30.628 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:30.628 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:30.887 08:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:30.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:30.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:39:30.887 00:39:30.887 --- 10.0.0.2 ping statistics --- 00:39:30.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:30.887 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:30.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:30.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:39:30.887 00:39:30.887 --- 10.0.0.1 ping statistics --- 00:39:30.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:30.887 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=937391 
00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 937391 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 937391 ']' 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:30.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:30.887 08:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:30.887 [2024-11-18 08:13:23.893931] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:30.887 [2024-11-18 08:13:23.895020] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:39:30.887 [2024-11-18 08:13:23.895080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:30.887 [2024-11-18 08:13:23.972779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:31.148 [2024-11-18 08:13:24.022860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:31.148 [2024-11-18 08:13:24.022918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:31.148 [2024-11-18 08:13:24.022934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:31.148 [2024-11-18 08:13:24.022946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:31.148 [2024-11-18 08:13:24.022956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:31.148 [2024-11-18 08:13:24.024530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:31.148 [2024-11-18 08:13:24.024557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:31.148 [2024-11-18 08:13:24.024615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:31.148 [2024-11-18 08:13:24.024619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:31.148 [2024-11-18 08:13:24.107426] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:31.148 [2024-11-18 08:13:24.107655] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:31.148 [2024-11-18 08:13:24.107878] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:31.148 [2024-11-18 08:13:24.108421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:31.148 [2024-11-18 08:13:24.108682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.148 [2024-11-18 08:13:24.165317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.148 Malloc0 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.148 [2024-11-18 08:13:24.225570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:31.148 test case1: single bdev can't be used in multiple subsystems 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.148 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.408 [2024-11-18 08:13:24.249277] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:39:31.408 [2024-11-18 08:13:24.249308] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:31.408 [2024-11-18 08:13:24.249323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.408 request: 00:39:31.408 { 00:39:31.408 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:31.408 "namespace": { 00:39:31.408 "bdev_name": "Malloc0", 00:39:31.408 "no_auto_visible": false 00:39:31.408 }, 00:39:31.408 "method": "nvmf_subsystem_add_ns", 00:39:31.408 "req_id": 1 00:39:31.408 } 00:39:31.408 Got JSON-RPC error response 00:39:31.408 response: 00:39:31.408 { 00:39:31.408 "code": -32602, 00:39:31.408 "message": "Invalid parameters" 00:39:31.408 } 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:31.408 Adding namespace failed - expected result. 
00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:31.408 test case2: host connect to nvmf target in multiple paths 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.408 [2024-11-18 08:13:24.257373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:31.408 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:31.667 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:31.667 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:31.667 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:31.667 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:31.667 08:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:34.204 08:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:34.204 08:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:34.204 08:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:34.204 08:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:34.204 08:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:34.204 08:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:34.204 08:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:34.204 [global] 00:39:34.204 thread=1 00:39:34.204 invalidate=1 00:39:34.204 rw=write 00:39:34.204 time_based=1 00:39:34.204 runtime=1 00:39:34.204 ioengine=libaio 00:39:34.204 direct=1 00:39:34.204 bs=4096 00:39:34.204 iodepth=1 00:39:34.204 norandommap=0 00:39:34.204 numjobs=1 00:39:34.204 00:39:34.204 verify_dump=1 00:39:34.204 verify_backlog=512 00:39:34.204 verify_state_save=0 00:39:34.204 do_verify=1 00:39:34.204 verify=crc32c-intel 00:39:34.204 [job0] 00:39:34.204 filename=/dev/nvme0n1 00:39:34.204 Could not set queue depth (nvme0n1) 00:39:34.204 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:34.204 fio-3.35 00:39:34.204 Starting 1 thread 00:39:35.142 00:39:35.142 job0: (groupid=0, jobs=1): err= 0: pid=937888: Mon Nov 18 
08:13:28 2024 00:39:35.142 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:39:35.142 slat (nsec): min=8100, max=14423, avg=13807.23, stdev=1284.34 00:39:35.142 clat (usec): min=40972, max=42253, avg=41899.05, stdev=301.19 00:39:35.142 lat (usec): min=40986, max=42261, avg=41912.85, stdev=300.87 00:39:35.142 clat percentiles (usec): 00:39:35.142 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:39:35.142 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:35.142 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:35.142 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:35.142 | 99.99th=[42206] 00:39:35.142 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:39:35.142 slat (usec): min=8, max=28454, avg=65.64, stdev=1257.08 00:39:35.142 clat (usec): min=144, max=262, avg=157.93, stdev=10.50 00:39:35.142 lat (usec): min=153, max=28659, avg=223.57, stdev=1259.22 00:39:35.142 clat percentiles (usec): 00:39:35.142 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 149], 20.00th=[ 151], 00:39:35.142 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 155], 60.00th=[ 157], 00:39:35.142 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 174], 00:39:35.142 | 99.00th=[ 202], 99.50th=[ 221], 99.90th=[ 265], 99.95th=[ 265], 00:39:35.142 | 99.99th=[ 265] 00:39:35.142 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:35.142 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:35.142 lat (usec) : 250=95.69%, 500=0.19% 00:39:35.142 lat (msec) : 50=4.12% 00:39:35.142 cpu : usr=0.58%, sys=0.39%, ctx=538, majf=0, minf=1 00:39:35.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:35.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.142 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:35.142 00:39:35.142 Run status group 0 (all jobs): 00:39:35.142 READ: bw=84.8KiB/s (86.8kB/s), 84.8KiB/s-84.8KiB/s (86.8kB/s-86.8kB/s), io=88.0KiB (90.1kB), run=1038-1038msec 00:39:35.142 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:39:35.142 00:39:35.142 Disk stats (read/write): 00:39:35.142 nvme0n1: ios=43/512, merge=0/0, ticks=1712/81, in_queue=1793, util=98.60% 00:39:35.142 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:35.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:35.400 08:13:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:35.400 rmmod nvme_tcp 00:39:35.400 rmmod nvme_fabrics 00:39:35.400 rmmod nvme_keyring 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 937391 ']' 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 937391 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 937391 ']' 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 937391 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 937391 00:39:35.400 
08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 937391' 00:39:35.400 killing process with pid 937391 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 937391 00:39:35.400 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 937391 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.658 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.574 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.574 00:39:37.574 real 0m9.227s 00:39:37.574 user 0m17.441s 00:39:37.574 sys 0m3.189s 00:39:37.574 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.574 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:37.574 ************************************ 00:39:37.574 END TEST nvmf_nmic 00:39:37.574 ************************************ 00:39:37.574 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:37.574 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:37.574 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.574 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:37.866 ************************************ 00:39:37.866 START TEST nvmf_fio_target 00:39:37.866 ************************************ 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:37.866 * Looking for test storage... 
00:39:37.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.866 
08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.866 --rc genhtml_branch_coverage=1 00:39:37.866 --rc genhtml_function_coverage=1 00:39:37.866 --rc genhtml_legend=1 00:39:37.866 --rc geninfo_all_blocks=1 00:39:37.866 --rc geninfo_unexecuted_blocks=1 00:39:37.866 00:39:37.866 ' 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.866 --rc genhtml_branch_coverage=1 00:39:37.866 --rc genhtml_function_coverage=1 00:39:37.866 --rc genhtml_legend=1 00:39:37.866 --rc geninfo_all_blocks=1 00:39:37.866 --rc geninfo_unexecuted_blocks=1 00:39:37.866 00:39:37.866 ' 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.866 --rc genhtml_branch_coverage=1 00:39:37.866 --rc genhtml_function_coverage=1 00:39:37.866 --rc genhtml_legend=1 00:39:37.866 --rc geninfo_all_blocks=1 00:39:37.866 --rc geninfo_unexecuted_blocks=1 00:39:37.866 00:39:37.866 ' 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.866 --rc genhtml_branch_coverage=1 00:39:37.866 --rc genhtml_function_coverage=1 00:39:37.866 --rc genhtml_legend=1 00:39:37.866 --rc geninfo_all_blocks=1 
00:39:37.866 --rc geninfo_unexecuted_blocks=1 00:39:37.866 00:39:37.866 ' 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.866 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.867 
08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.867 08:13:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.867 
08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.867 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.867 08:13:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:40.427 08:13:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:40.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:40.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:40.427 
08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:40.427 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:40.428 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:40.428 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:40.428 08:13:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:39:40.428 08:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:40.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:39:40.428 00:39:40.428 --- 10.0.0.2 ping statistics --- 00:39:40.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.428 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:40.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:40.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:39:40.428 00:39:40.428 --- 10.0.0.1 ping statistics --- 00:39:40.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.428 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:40.428 08:13:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=940072 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 940072 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 940072 ']' 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:40.428 [2024-11-18 08:13:33.242694] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:40.428 [2024-11-18 08:13:33.243815] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:39:40.428 [2024-11-18 08:13:33.243879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:40.428 [2024-11-18 08:13:33.314672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:40.428 [2024-11-18 08:13:33.359702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.428 [2024-11-18 08:13:33.359758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.428 [2024-11-18 08:13:33.359782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.428 [2024-11-18 08:13:33.359807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.428 [2024-11-18 08:13:33.359817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:40.428 [2024-11-18 08:13:33.361227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.428 [2024-11-18 08:13:33.361252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:40.428 [2024-11-18 08:13:33.361308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:40.428 [2024-11-18 08:13:33.361311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.428 [2024-11-18 08:13:33.441992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:40.428 [2024-11-18 08:13:33.442215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:40.428 [2024-11-18 08:13:33.442451] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:40.428 [2024-11-18 08:13:33.442984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:40.428 [2024-11-18 08:13:33.443209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.428 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:40.686 [2024-11-18 08:13:33.746024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.944 08:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:41.210 08:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:41.210 08:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:39:41.472 08:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:41.472 08:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:41.731 08:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:41.731 08:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:41.991 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:41.991 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:42.249 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:42.507 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:42.507 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:43.073 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:43.073 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:43.331 08:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:39:43.331 08:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:43.590 08:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:43.849 08:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:43.849 08:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:44.108 08:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:44.108 08:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:44.365 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:44.621 [2024-11-18 08:13:37.526159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:44.621 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:44.880 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:45.138 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:45.397 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:45.397 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:45.397 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:45.397 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:45.397 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:45.397 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:47.304 08:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:47.304 08:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:47.304 08:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:47.304 08:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:47.304 08:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:47.304 08:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:39:47.304 08:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:47.304 [global] 00:39:47.304 thread=1 00:39:47.304 invalidate=1 00:39:47.304 rw=write 00:39:47.304 time_based=1 00:39:47.304 runtime=1 00:39:47.304 ioengine=libaio 00:39:47.304 direct=1 00:39:47.304 bs=4096 00:39:47.304 iodepth=1 00:39:47.304 norandommap=0 00:39:47.304 numjobs=1 00:39:47.304 00:39:47.304 verify_dump=1 00:39:47.304 verify_backlog=512 00:39:47.304 verify_state_save=0 00:39:47.304 do_verify=1 00:39:47.304 verify=crc32c-intel 00:39:47.304 [job0] 00:39:47.304 filename=/dev/nvme0n1 00:39:47.304 [job1] 00:39:47.304 filename=/dev/nvme0n2 00:39:47.304 [job2] 00:39:47.304 filename=/dev/nvme0n3 00:39:47.304 [job3] 00:39:47.304 filename=/dev/nvme0n4 00:39:47.562 Could not set queue depth (nvme0n1) 00:39:47.562 Could not set queue depth (nvme0n2) 00:39:47.563 Could not set queue depth (nvme0n3) 00:39:47.563 Could not set queue depth (nvme0n4) 00:39:47.563 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.563 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.563 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.563 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.563 fio-3.35 00:39:47.563 Starting 4 threads 00:39:48.942 00:39:48.942 job0: (groupid=0, jobs=1): err= 0: pid=941026: Mon Nov 18 08:13:41 2024 00:39:48.942 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:48.942 slat (nsec): min=4401, max=34054, avg=9854.59, stdev=5036.16 00:39:48.942 clat (usec): min=196, max=1240, avg=232.62, stdev=44.06 00:39:48.942 lat (usec): min=202, 
max=1246, avg=242.48, stdev=46.23 00:39:48.942 clat percentiles (usec): 00:39:48.942 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:39:48.942 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 227], 00:39:48.942 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 273], 00:39:48.942 | 99.00th=[ 461], 99.50th=[ 478], 99.90th=[ 545], 99.95th=[ 553], 00:39:48.942 | 99.99th=[ 1237] 00:39:48.942 write: IOPS=2513, BW=9.82MiB/s (10.3MB/s)(9.83MiB/1001msec); 0 zone resets 00:39:48.942 slat (nsec): min=5784, max=70285, avg=13404.67, stdev=7943.76 00:39:48.942 clat (usec): min=137, max=483, avg=180.35, stdev=44.01 00:39:48.942 lat (usec): min=145, max=491, avg=193.75, stdev=47.54 00:39:48.942 clat percentiles (usec): 00:39:48.942 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:39:48.942 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:39:48.942 | 70.00th=[ 176], 80.00th=[ 196], 90.00th=[ 243], 95.00th=[ 289], 00:39:48.942 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 379], 99.95th=[ 396], 00:39:48.942 | 99.99th=[ 486] 00:39:48.942 bw ( KiB/s): min=11088, max=11088, per=50.60%, avg=11088.00, stdev= 0.00, samples=1 00:39:48.942 iops : min= 2772, max= 2772, avg=2772.00, stdev= 0.00, samples=1 00:39:48.942 lat (usec) : 250=91.24%, 500=8.68%, 750=0.07% 00:39:48.942 lat (msec) : 2=0.02% 00:39:48.942 cpu : usr=3.40%, sys=5.00%, ctx=4566, majf=0, minf=1 00:39:48.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.942 issued rwts: total=2048,2516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.942 job1: (groupid=0, jobs=1): err= 0: pid=941027: Mon Nov 18 08:13:41 2024 00:39:48.942 read: IOPS=75, BW=302KiB/s (309kB/s)(308KiB/1020msec) 
00:39:48.942 slat (nsec): min=7144, max=34632, avg=15123.00, stdev=8739.15 00:39:48.942 clat (usec): min=222, max=40990, avg=11521.17, stdev=18167.94 00:39:48.942 lat (usec): min=230, max=41005, avg=11536.29, stdev=18173.66 00:39:48.942 clat percentiles (usec): 00:39:48.942 | 1.00th=[ 223], 5.00th=[ 243], 10.00th=[ 262], 20.00th=[ 281], 00:39:48.942 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 330], 60.00th=[ 392], 00:39:48.942 | 70.00th=[ 441], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:48.942 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:48.942 | 99.99th=[41157] 00:39:48.942 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:39:48.942 slat (nsec): min=6495, max=23556, avg=7945.87, stdev=2205.68 00:39:48.942 clat (usec): min=165, max=462, avg=246.26, stdev=28.19 00:39:48.942 lat (usec): min=173, max=469, avg=254.20, stdev=27.98 00:39:48.942 clat percentiles (usec): 00:39:48.942 | 1.00th=[ 176], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 239], 00:39:48.942 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:39:48.942 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 269], 00:39:48.942 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 461], 99.95th=[ 461], 00:39:48.942 | 99.99th=[ 461] 00:39:48.942 bw ( KiB/s): min= 4096, max= 4096, per=18.69%, avg=4096.00, stdev= 0.00, samples=1 00:39:48.942 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:48.942 lat (usec) : 250=73.51%, 500=22.75% 00:39:48.942 lat (msec) : 10=0.17%, 50=3.57% 00:39:48.942 cpu : usr=0.10%, sys=0.69%, ctx=589, majf=0, minf=2 00:39:48.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.942 issued rwts: total=77,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.942 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:39:48.942 job2: (groupid=0, jobs=1): err= 0: pid=941028: Mon Nov 18 08:13:41 2024 00:39:48.942 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:39:48.942 slat (nsec): min=6675, max=41809, avg=25590.81, stdev=10837.69 00:39:48.942 clat (usec): min=40883, max=42048, avg=41022.85, stdev=238.63 00:39:48.942 lat (usec): min=40924, max=42082, avg=41048.44, stdev=239.56 00:39:48.942 clat percentiles (usec): 00:39:48.942 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:48.942 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:48.942 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:48.943 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:48.943 | 99.99th=[42206] 00:39:48.943 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:39:48.943 slat (nsec): min=6351, max=26941, avg=9201.39, stdev=3151.89 00:39:48.943 clat (usec): min=166, max=500, avg=248.96, stdev=42.06 00:39:48.943 lat (usec): min=174, max=508, avg=258.16, stdev=42.82 00:39:48.943 clat percentiles (usec): 00:39:48.943 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 221], 20.00th=[ 231], 00:39:48.943 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:39:48.943 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 338], 00:39:48.943 | 99.00th=[ 416], 99.50th=[ 465], 99.90th=[ 502], 99.95th=[ 502], 00:39:48.943 | 99.99th=[ 502] 00:39:48.943 bw ( KiB/s): min= 4096, max= 4096, per=18.69%, avg=4096.00, stdev= 0.00, samples=1 00:39:48.943 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:48.943 lat (usec) : 250=66.60%, 500=29.27%, 750=0.19% 00:39:48.943 lat (msec) : 50=3.94% 00:39:48.943 cpu : usr=0.20%, sys=0.50%, ctx=535, majf=0, minf=1 00:39:48.943 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:39:48.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.943 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.943 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.943 job3: (groupid=0, jobs=1): err= 0: pid=941029: Mon Nov 18 08:13:41 2024 00:39:48.943 read: IOPS=1761, BW=7046KiB/s (7215kB/s)(7060KiB/1002msec) 00:39:48.943 slat (nsec): min=5664, max=43376, avg=12289.88, stdev=5231.86 00:39:48.943 clat (usec): min=216, max=40625, avg=299.95, stdev=961.65 00:39:48.943 lat (usec): min=223, max=40634, avg=312.24, stdev=961.69 00:39:48.943 clat percentiles (usec): 00:39:48.943 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:39:48.943 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 273], 00:39:48.943 | 70.00th=[ 289], 80.00th=[ 318], 90.00th=[ 322], 95.00th=[ 330], 00:39:48.943 | 99.00th=[ 420], 99.50th=[ 545], 99.90th=[ 1123], 99.95th=[40633], 00:39:48.943 | 99.99th=[40633] 00:39:48.943 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:39:48.943 slat (nsec): min=7120, max=42985, avg=15211.78, stdev=6583.66 00:39:48.943 clat (usec): min=158, max=436, avg=196.61, stdev=19.72 00:39:48.943 lat (usec): min=166, max=467, avg=211.82, stdev=22.61 00:39:48.943 clat percentiles (usec): 00:39:48.943 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182], 00:39:48.943 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:39:48.943 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 212], 95.00th=[ 219], 00:39:48.943 | 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 392], 99.95th=[ 400], 00:39:48.943 | 99.99th=[ 437] 00:39:48.943 bw ( KiB/s): min= 8192, max= 8192, per=37.38%, avg=8192.00, stdev= 0.00, samples=2 00:39:48.943 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:39:48.943 lat (usec) : 250=66.56%, 500=33.20%, 750=0.13%, 1000=0.05% 00:39:48.943 lat (msec) : 2=0.03%, 50=0.03% 00:39:48.943 cpu : usr=4.90%, sys=6.29%, 
ctx=3814, majf=0, minf=2 00:39:48.943 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.943 issued rwts: total=1765,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.943 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.943 00:39:48.943 Run status group 0 (all jobs): 00:39:48.943 READ: bw=15.0MiB/s (15.7MB/s), 83.9KiB/s-8184KiB/s (85.9kB/s-8380kB/s), io=15.3MiB (16.0MB), run=1001-1020msec 00:39:48.943 WRITE: bw=21.4MiB/s (22.4MB/s), 2008KiB/s-9.82MiB/s (2056kB/s-10.3MB/s), io=21.8MiB (22.9MB), run=1001-1020msec 00:39:48.943 00:39:48.943 Disk stats (read/write): 00:39:48.943 nvme0n1: ios=1948/2048, merge=0/0, ticks=1390/341, in_queue=1731, util=97.70% 00:39:48.943 nvme0n2: ios=85/512, merge=0/0, ticks=686/124, in_queue=810, util=86.57% 00:39:48.943 nvme0n3: ios=75/512, merge=0/0, ticks=1311/126, in_queue=1437, util=97.38% 00:39:48.943 nvme0n4: ios=1536/1719, merge=0/0, ticks=447/315, in_queue=762, util=89.66% 00:39:48.943 08:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:48.943 [global] 00:39:48.943 thread=1 00:39:48.943 invalidate=1 00:39:48.943 rw=randwrite 00:39:48.943 time_based=1 00:39:48.943 runtime=1 00:39:48.943 ioengine=libaio 00:39:48.943 direct=1 00:39:48.943 bs=4096 00:39:48.943 iodepth=1 00:39:48.943 norandommap=0 00:39:48.943 numjobs=1 00:39:48.943 00:39:48.943 verify_dump=1 00:39:48.943 verify_backlog=512 00:39:48.943 verify_state_save=0 00:39:48.943 do_verify=1 00:39:48.943 verify=crc32c-intel 00:39:48.943 [job0] 00:39:48.943 filename=/dev/nvme0n1 00:39:48.943 [job1] 00:39:48.943 filename=/dev/nvme0n2 00:39:48.943 [job2] 00:39:48.943 filename=/dev/nvme0n3 00:39:48.943 [job3] 
00:39:48.943 filename=/dev/nvme0n4 00:39:48.943 Could not set queue depth (nvme0n1) 00:39:48.943 Could not set queue depth (nvme0n2) 00:39:48.943 Could not set queue depth (nvme0n3) 00:39:48.943 Could not set queue depth (nvme0n4) 00:39:49.202 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:49.202 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:49.202 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:49.202 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:49.202 fio-3.35 00:39:49.202 Starting 4 threads 00:39:50.579 00:39:50.579 job0: (groupid=0, jobs=1): err= 0: pid=941256: Mon Nov 18 08:13:43 2024 00:39:50.579 read: IOPS=2103, BW=8416KiB/s (8618kB/s)(8424KiB/1001msec) 00:39:50.579 slat (nsec): min=5609, max=30747, avg=6592.92, stdev=1957.00 00:39:50.579 clat (usec): min=196, max=463, avg=241.75, stdev=20.77 00:39:50.579 lat (usec): min=202, max=470, avg=248.34, stdev=20.91 00:39:50.579 clat percentiles (usec): 00:39:50.579 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:39:50.579 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:39:50.579 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:39:50.579 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 392], 99.95th=[ 396], 00:39:50.579 | 99.99th=[ 465] 00:39:50.579 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:39:50.579 slat (nsec): min=7026, max=30990, avg=8342.85, stdev=2273.26 00:39:50.579 clat (usec): min=146, max=989, avg=173.91, stdev=25.59 00:39:50.579 lat (usec): min=154, max=997, avg=182.25, stdev=25.76 00:39:50.579 clat percentiles (usec): 00:39:50.579 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:39:50.579 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 
172], 60.00th=[ 176], 00:39:50.579 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:39:50.579 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 627], 99.95th=[ 848], 00:39:50.579 | 99.99th=[ 988] 00:39:50.579 bw ( KiB/s): min=10400, max=10400, per=35.07%, avg=10400.00, stdev= 0.00, samples=1 00:39:50.579 iops : min= 2600, max= 2600, avg=2600.00, stdev= 0.00, samples=1 00:39:50.579 lat (usec) : 250=85.92%, 500=14.02%, 750=0.02%, 1000=0.04% 00:39:50.579 cpu : usr=2.50%, sys=5.10%, ctx=4666, majf=0, minf=1 00:39:50.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.579 issued rwts: total=2106,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.579 job1: (groupid=0, jobs=1): err= 0: pid=941257: Mon Nov 18 08:13:43 2024 00:39:50.579 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:39:50.579 slat (nsec): min=7742, max=21420, avg=13537.18, stdev=2162.57 00:39:50.579 clat (usec): min=363, max=41068, avg=39121.28, stdev=8657.12 00:39:50.579 lat (usec): min=371, max=41081, avg=39134.82, stdev=8658.40 00:39:50.579 clat percentiles (usec): 00:39:50.579 | 1.00th=[ 363], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:50.579 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:50.579 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:50.579 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:50.579 | 99.99th=[41157] 00:39:50.579 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:39:50.579 slat (nsec): min=7609, max=29788, avg=10267.32, stdev=2857.20 00:39:50.579 clat (usec): min=160, max=1224, avg=282.62, stdev=73.76 00:39:50.579 lat (usec): min=175, max=1234, avg=292.89, 
stdev=73.79 00:39:50.579 clat percentiles (usec): 00:39:50.579 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 208], 20.00th=[ 253], 00:39:50.579 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 285], 00:39:50.579 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 396], 00:39:50.579 | 99.00th=[ 437], 99.50th=[ 709], 99.90th=[ 1221], 99.95th=[ 1221], 00:39:50.579 | 99.99th=[ 1221] 00:39:50.579 bw ( KiB/s): min= 4096, max= 4096, per=13.81%, avg=4096.00, stdev= 0.00, samples=1 00:39:50.579 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:50.579 lat (usec) : 250=18.16%, 500=77.34%, 750=0.19%, 1000=0.19% 00:39:50.579 lat (msec) : 2=0.19%, 50=3.93% 00:39:50.579 cpu : usr=0.30%, sys=0.69%, ctx=534, majf=0, minf=2 00:39:50.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.579 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.579 job2: (groupid=0, jobs=1): err= 0: pid=941259: Mon Nov 18 08:13:43 2024 00:39:50.579 read: IOPS=1704, BW=6817KiB/s (6981kB/s)(6824KiB/1001msec) 00:39:50.579 slat (nsec): min=5882, max=28952, avg=6934.42, stdev=2084.76 00:39:50.579 clat (usec): min=228, max=41644, avg=296.85, stdev=1002.50 00:39:50.579 lat (usec): min=235, max=41650, avg=303.79, stdev=1002.49 00:39:50.579 clat percentiles (usec): 00:39:50.579 | 1.00th=[ 237], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:39:50.579 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 260], 60.00th=[ 265], 00:39:50.579 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 355], 95.00th=[ 379], 00:39:50.579 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 652], 99.95th=[41681], 00:39:50.579 | 99.99th=[41681] 00:39:50.579 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone 
resets 00:39:50.579 slat (nsec): min=7405, max=31590, avg=9314.22, stdev=2913.28 00:39:50.579 clat (usec): min=171, max=1252, avg=221.81, stdev=69.00 00:39:50.579 lat (usec): min=179, max=1267, avg=231.12, stdev=70.30 00:39:50.579 clat percentiles (usec): 00:39:50.579 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:39:50.579 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:39:50.579 | 70.00th=[ 208], 80.00th=[ 249], 90.00th=[ 314], 95.00th=[ 379], 00:39:50.579 | 99.00th=[ 461], 99.50th=[ 478], 99.90th=[ 865], 99.95th=[ 947], 00:39:50.579 | 99.99th=[ 1254] 00:39:50.579 bw ( KiB/s): min= 8192, max= 8192, per=27.62%, avg=8192.00, stdev= 0.00, samples=1 00:39:50.579 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:50.579 lat (usec) : 250=54.29%, 500=45.42%, 750=0.19%, 1000=0.05% 00:39:50.579 lat (msec) : 2=0.03%, 50=0.03% 00:39:50.579 cpu : usr=1.80%, sys=4.70%, ctx=3754, majf=0, minf=2 00:39:50.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.579 issued rwts: total=1706,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.579 job3: (groupid=0, jobs=1): err= 0: pid=941261: Mon Nov 18 08:13:43 2024 00:39:50.579 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:50.579 slat (nsec): min=6086, max=28352, avg=7274.20, stdev=2086.37 00:39:50.579 clat (usec): min=220, max=965, avg=256.49, stdev=33.74 00:39:50.579 lat (usec): min=226, max=972, avg=263.76, stdev=33.79 00:39:50.579 clat percentiles (usec): 00:39:50.579 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:39:50.579 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:39:50.579 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 
281], 00:39:50.579 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 498], 99.95th=[ 515], 00:39:50.579 | 99.99th=[ 963] 00:39:50.579 write: IOPS=2387, BW=9550KiB/s (9780kB/s)(9560KiB/1001msec); 0 zone resets 00:39:50.579 slat (nsec): min=7694, max=31059, avg=9133.08, stdev=2308.56 00:39:50.579 clat (usec): min=148, max=1251, avg=178.98, stdev=32.13 00:39:50.579 lat (usec): min=159, max=1265, avg=188.11, stdev=32.39 00:39:50.579 clat percentiles (usec): 00:39:50.579 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:39:50.579 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:39:50.579 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:39:50.579 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 619], 99.95th=[ 906], 00:39:50.579 | 99.99th=[ 1254] 00:39:50.579 bw ( KiB/s): min= 9216, max= 9216, per=31.08%, avg=9216.00, stdev= 0.00, samples=1 00:39:50.579 iops : min= 2304, max= 2304, avg=2304.00, stdev= 0.00, samples=1 00:39:50.579 lat (usec) : 250=75.71%, 500=24.16%, 750=0.07%, 1000=0.05% 00:39:50.579 lat (msec) : 2=0.02% 00:39:50.580 cpu : usr=2.10%, sys=5.50%, ctx=4439, majf=0, minf=1 00:39:50.580 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.580 issued rwts: total=2048,2390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.580 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.580 00:39:50.580 Run status group 0 (all jobs): 00:39:50.580 READ: bw=22.7MiB/s (23.8MB/s), 86.9KiB/s-8416KiB/s (89.0kB/s-8618kB/s), io=23.0MiB (24.1MB), run=1001-1013msec 00:39:50.580 WRITE: bw=29.0MiB/s (30.4MB/s), 2022KiB/s-9.99MiB/s (2070kB/s-10.5MB/s), io=29.3MiB (30.8MB), run=1001-1013msec 00:39:50.580 00:39:50.580 Disk stats (read/write): 00:39:50.580 nvme0n1: ios=1939/2048, merge=0/0, ticks=474/339, in_queue=813, util=87.37% 
00:39:50.580 nvme0n2: ios=68/512, merge=0/0, ticks=798/131, in_queue=929, util=95.74% 00:39:50.580 nvme0n3: ios=1593/1564, merge=0/0, ticks=520/341, in_queue=861, util=95.42% 00:39:50.580 nvme0n4: ios=1797/2048, merge=0/0, ticks=858/354, in_queue=1212, util=99.69% 00:39:50.580 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:50.580 [global] 00:39:50.580 thread=1 00:39:50.580 invalidate=1 00:39:50.580 rw=write 00:39:50.580 time_based=1 00:39:50.580 runtime=1 00:39:50.580 ioengine=libaio 00:39:50.580 direct=1 00:39:50.580 bs=4096 00:39:50.580 iodepth=128 00:39:50.580 norandommap=0 00:39:50.580 numjobs=1 00:39:50.580 00:39:50.580 verify_dump=1 00:39:50.580 verify_backlog=512 00:39:50.580 verify_state_save=0 00:39:50.580 do_verify=1 00:39:50.580 verify=crc32c-intel 00:39:50.580 [job0] 00:39:50.580 filename=/dev/nvme0n1 00:39:50.580 [job1] 00:39:50.580 filename=/dev/nvme0n2 00:39:50.580 [job2] 00:39:50.580 filename=/dev/nvme0n3 00:39:50.580 [job3] 00:39:50.580 filename=/dev/nvme0n4 00:39:50.580 Could not set queue depth (nvme0n1) 00:39:50.580 Could not set queue depth (nvme0n2) 00:39:50.580 Could not set queue depth (nvme0n3) 00:39:50.580 Could not set queue depth (nvme0n4) 00:39:50.580 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.580 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.580 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.580 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.580 fio-3.35 00:39:50.580 Starting 4 threads 00:39:51.957 00:39:51.957 job0: (groupid=0, jobs=1): err= 0: pid=941607: Mon Nov 18 08:13:44 2024 00:39:51.957 read: 
IOPS=3007, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1004msec) 00:39:51.957 slat (usec): min=2, max=14582, avg=165.05, stdev=1082.30 00:39:51.957 clat (usec): min=2073, max=62401, avg=21547.36, stdev=10297.21 00:39:51.957 lat (usec): min=5194, max=62419, avg=21712.41, stdev=10405.47 00:39:51.957 clat percentiles (usec): 00:39:51.957 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10814], 20.00th=[12780], 00:39:51.957 | 30.00th=[14484], 40.00th=[16909], 50.00th=[20055], 60.00th=[21103], 00:39:51.957 | 70.00th=[25035], 80.00th=[27657], 90.00th=[37487], 95.00th=[40109], 00:39:51.957 | 99.00th=[53740], 99.50th=[61080], 99.90th=[62653], 99.95th=[62653], 00:39:51.957 | 99.99th=[62653] 00:39:51.957 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:39:51.957 slat (usec): min=3, max=26647, avg=144.11, stdev=1104.92 00:39:51.957 clat (usec): min=4437, max=62677, avg=20252.67, stdev=10119.73 00:39:51.957 lat (usec): min=4447, max=62717, avg=20396.78, stdev=10222.69 00:39:51.957 clat percentiles (usec): 00:39:51.957 | 1.00th=[ 6915], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[10814], 00:39:51.957 | 30.00th=[11863], 40.00th=[13435], 50.00th=[19530], 60.00th=[21890], 00:39:51.957 | 70.00th=[25035], 80.00th=[28181], 90.00th=[32900], 95.00th=[41157], 00:39:51.957 | 99.00th=[48497], 99.50th=[56886], 99.90th=[62653], 99.95th=[62653], 00:39:51.957 | 99.99th=[62653] 00:39:51.957 bw ( KiB/s): min= 8264, max=16312, per=23.34%, avg=12288.00, stdev=5690.80, samples=2 00:39:51.957 iops : min= 2066, max= 4078, avg=3072.00, stdev=1422.70, samples=2 00:39:51.957 lat (msec) : 4=0.02%, 10=4.99%, 20=45.55%, 50=48.01%, 100=1.43% 00:39:51.957 cpu : usr=3.29%, sys=5.88%, ctx=158, majf=0, minf=1 00:39:51.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:39:51.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.957 issued rwts: 
total=3020,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.957 job1: (groupid=0, jobs=1): err= 0: pid=941608: Mon Nov 18 08:13:44 2024 00:39:51.957 read: IOPS=2390, BW=9563KiB/s (9793kB/s)(9984KiB/1044msec) 00:39:51.957 slat (usec): min=3, max=14586, avg=195.81, stdev=1194.71 00:39:51.957 clat (usec): min=7933, max=85740, avg=25286.23, stdev=14322.42 00:39:51.957 lat (usec): min=7956, max=85744, avg=25482.04, stdev=14441.37 00:39:51.957 clat percentiles (usec): 00:39:51.957 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[13173], 00:39:51.957 | 30.00th=[13698], 40.00th=[20841], 50.00th=[21365], 60.00th=[25035], 00:39:51.957 | 70.00th=[31065], 80.00th=[36439], 90.00th=[40633], 95.00th=[52167], 00:39:51.957 | 99.00th=[77071], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:39:51.957 | 99.99th=[85459] 00:39:51.957 write: IOPS=2452, BW=9808KiB/s (10.0MB/s)(10.0MiB/1044msec); 0 zone resets 00:39:51.957 slat (usec): min=3, max=16046, avg=188.82, stdev=1124.42 00:39:51.957 clat (msec): min=7, max=106, avg=27.05, stdev=21.58 00:39:51.957 lat (msec): min=7, max=106, avg=27.24, stdev=21.73 00:39:51.957 clat percentiles (msec): 00:39:51.957 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:39:51.957 | 30.00th=[ 15], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 22], 00:39:51.957 | 70.00th=[ 23], 80.00th=[ 31], 90.00th=[ 71], 95.00th=[ 81], 00:39:51.957 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 101], 99.95th=[ 108], 00:39:51.957 | 99.99th=[ 108] 00:39:51.957 bw ( KiB/s): min= 8352, max=12128, per=19.45%, avg=10240.00, stdev=2670.04, samples=2 00:39:51.957 iops : min= 2088, max= 3032, avg=2560.00, stdev=667.51, samples=2 00:39:51.957 lat (msec) : 10=2.73%, 20=41.52%, 50=44.88%, 100=10.68%, 250=0.20% 00:39:51.957 cpu : usr=4.03%, sys=5.08%, ctx=188, majf=0, minf=1 00:39:51.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:39:51.957 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.957 issued rwts: total=2496,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.957 job2: (groupid=0, jobs=1): err= 0: pid=941609: Mon Nov 18 08:13:44 2024 00:39:51.957 read: IOPS=2717, BW=10.6MiB/s (11.1MB/s)(11.1MiB/1045msec) 00:39:51.957 slat (usec): min=3, max=16078, avg=164.07, stdev=1106.36 00:39:51.957 clat (usec): min=8291, max=68693, avg=22858.85, stdev=11514.18 00:39:51.957 lat (usec): min=8307, max=68699, avg=23022.91, stdev=11580.97 00:39:51.957 clat percentiles (usec): 00:39:51.957 | 1.00th=[ 9503], 5.00th=[11731], 10.00th=[13173], 20.00th=[14091], 00:39:51.957 | 30.00th=[16319], 40.00th=[17695], 50.00th=[20317], 60.00th=[22676], 00:39:51.957 | 70.00th=[24773], 80.00th=[25822], 90.00th=[40109], 95.00th=[52167], 00:39:51.957 | 99.00th=[62129], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:39:51.957 | 99.99th=[68682] 00:39:51.957 write: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1045msec); 0 zone resets 00:39:51.957 slat (usec): min=3, max=14455, avg=164.81, stdev=872.77 00:39:51.957 clat (usec): min=6616, max=63687, avg=21965.33, stdev=14457.98 00:39:51.957 lat (usec): min=6624, max=63699, avg=22130.14, stdev=14566.15 00:39:51.957 clat percentiles (usec): 00:39:51.957 | 1.00th=[ 9110], 5.00th=[11994], 10.00th=[12256], 20.00th=[12911], 00:39:51.957 | 30.00th=[13304], 40.00th=[13435], 50.00th=[15270], 60.00th=[17695], 00:39:51.957 | 70.00th=[20579], 80.00th=[25035], 90.00th=[52691], 95.00th=[55837], 00:39:51.957 | 99.00th=[60556], 99.50th=[61080], 99.90th=[63701], 99.95th=[63701], 00:39:51.957 | 99.99th=[63701] 00:39:51.957 bw ( KiB/s): min= 8192, max=16384, per=23.34%, avg=12288.00, stdev=5792.62, samples=2 00:39:51.957 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:39:51.958 lat (msec) : 10=1.71%, 
20=56.56%, 50=31.55%, 100=10.18% 00:39:51.958 cpu : usr=4.31%, sys=5.27%, ctx=299, majf=0, minf=2 00:39:51.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:39:51.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.958 issued rwts: total=2840,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.958 job3: (groupid=0, jobs=1): err= 0: pid=941610: Mon Nov 18 08:13:44 2024 00:39:51.958 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:39:51.958 slat (usec): min=3, max=8129, avg=91.21, stdev=563.59 00:39:51.958 clat (usec): min=3435, max=47315, avg=12940.63, stdev=4335.92 00:39:51.958 lat (usec): min=3450, max=47324, avg=13031.83, stdev=4350.72 00:39:51.958 clat percentiles (usec): 00:39:51.958 | 1.00th=[ 4490], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10683], 00:39:51.958 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:39:51.958 | 70.00th=[13566], 80.00th=[15270], 90.00th=[16581], 95.00th=[20841], 00:39:51.958 | 99.00th=[33424], 99.50th=[36963], 99.90th=[36963], 99.95th=[47449], 00:39:51.958 | 99.99th=[47449] 00:39:51.958 write: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(19.7MiB/1001msec); 0 zone resets 00:39:51.958 slat (usec): min=4, max=28292, avg=94.99, stdev=676.82 00:39:51.958 clat (usec): min=545, max=68387, avg=13270.00, stdev=6057.17 00:39:51.958 lat (usec): min=3518, max=68405, avg=13364.99, stdev=6095.31 00:39:51.958 clat percentiles (usec): 00:39:51.958 | 1.00th=[ 4555], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[10814], 00:39:51.958 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12387], 00:39:51.958 | 70.00th=[12649], 80.00th=[14615], 90.00th=[16909], 95.00th=[17695], 00:39:51.958 | 99.00th=[47973], 99.50th=[47973], 99.90th=[53740], 99.95th=[67634], 00:39:51.958 | 99.99th=[68682] 00:39:51.958 bw ( 
KiB/s): min=16432, max=16432, per=31.21%, avg=16432.00, stdev= 0.00, samples=1 00:39:51.958 iops : min= 4108, max= 4108, avg=4108.00, stdev= 0.00, samples=1 00:39:51.958 lat (usec) : 750=0.01% 00:39:51.958 lat (msec) : 4=0.26%, 10=7.74%, 20=86.49%, 50=5.37%, 100=0.12% 00:39:51.958 cpu : usr=5.50%, sys=11.90%, ctx=406, majf=0, minf=1 00:39:51.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:51.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.958 issued rwts: total=4608,5050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.958 00:39:51.958 Run status group 0 (all jobs): 00:39:51.958 READ: bw=48.5MiB/s (50.8MB/s), 9563KiB/s-18.0MiB/s (9793kB/s-18.9MB/s), io=50.6MiB (53.1MB), run=1001-1045msec 00:39:51.958 WRITE: bw=51.4MiB/s (53.9MB/s), 9808KiB/s-19.7MiB/s (10.0MB/s-20.7MB/s), io=53.7MiB (56.3MB), run=1001-1045msec 00:39:51.958 00:39:51.958 Disk stats (read/write): 00:39:51.958 nvme0n1: ios=2601/3000, merge=0/0, ticks=29464/29179, in_queue=58643, util=98.40% 00:39:51.958 nvme0n2: ios=2067/2477, merge=0/0, ticks=15111/24877, in_queue=39988, util=99.70% 00:39:51.958 nvme0n3: ios=2416/2560, merge=0/0, ticks=24685/26260, in_queue=50945, util=88.85% 00:39:51.958 nvme0n4: ios=3894/4096, merge=0/0, ticks=25648/24573, in_queue=50221, util=98.11% 00:39:51.958 08:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:51.958 [global] 00:39:51.958 thread=1 00:39:51.958 invalidate=1 00:39:51.958 rw=randwrite 00:39:51.958 time_based=1 00:39:51.958 runtime=1 00:39:51.958 ioengine=libaio 00:39:51.958 direct=1 00:39:51.958 bs=4096 00:39:51.958 iodepth=128 00:39:51.958 norandommap=0 00:39:51.958 numjobs=1 
00:39:51.958 00:39:51.958 verify_dump=1 00:39:51.958 verify_backlog=512 00:39:51.958 verify_state_save=0 00:39:51.958 do_verify=1 00:39:51.958 verify=crc32c-intel 00:39:51.958 [job0] 00:39:51.958 filename=/dev/nvme0n1 00:39:51.958 [job1] 00:39:51.958 filename=/dev/nvme0n2 00:39:51.958 [job2] 00:39:51.958 filename=/dev/nvme0n3 00:39:51.958 [job3] 00:39:51.958 filename=/dev/nvme0n4 00:39:51.958 Could not set queue depth (nvme0n1) 00:39:51.958 Could not set queue depth (nvme0n2) 00:39:51.958 Could not set queue depth (nvme0n3) 00:39:51.958 Could not set queue depth (nvme0n4) 00:39:51.958 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.958 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.958 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.958 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.958 fio-3.35 00:39:51.958 Starting 4 threads 00:39:53.334 00:39:53.334 job0: (groupid=0, jobs=1): err= 0: pid=941835: Mon Nov 18 08:13:46 2024 00:39:53.334 read: IOPS=4672, BW=18.3MiB/s (19.1MB/s)(18.3MiB/1005msec) 00:39:53.334 slat (usec): min=3, max=5724, avg=93.13, stdev=594.63 00:39:53.335 clat (usec): min=4058, max=18222, avg=12230.26, stdev=1803.66 00:39:53.335 lat (usec): min=6039, max=18320, avg=12323.39, stdev=1840.03 00:39:53.335 clat percentiles (usec): 00:39:53.335 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10945], 00:39:53.335 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:39:53.335 | 70.00th=[12780], 80.00th=[13698], 90.00th=[14746], 95.00th=[15926], 00:39:53.335 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:39:53.335 | 99.99th=[18220] 00:39:53.335 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 
00:39:53.335 slat (usec): min=4, max=17622, avg=101.00, stdev=660.43 00:39:53.335 clat (usec): min=6207, max=55583, avg=13592.09, stdev=6906.04 00:39:53.335 lat (usec): min=6224, max=55594, avg=13693.08, stdev=6959.19 00:39:53.335 clat percentiles (usec): 00:39:53.335 | 1.00th=[ 7701], 5.00th=[10421], 10.00th=[11076], 20.00th=[11469], 00:39:53.335 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:39:53.335 | 70.00th=[12256], 80.00th=[12780], 90.00th=[15270], 95.00th=[24773], 00:39:53.335 | 99.00th=[52691], 99.50th=[52691], 99.90th=[55313], 99.95th=[55837], 00:39:53.335 | 99.99th=[55837] 00:39:53.335 bw ( KiB/s): min=20168, max=20480, per=27.67%, avg=20324.00, stdev=220.62, samples=2 00:39:53.335 iops : min= 5042, max= 5120, avg=5081.00, stdev=55.15, samples=2 00:39:53.335 lat (msec) : 10=5.37%, 20=90.68%, 50=2.98%, 100=0.97% 00:39:53.335 cpu : usr=5.48%, sys=9.36%, ctx=335, majf=0, minf=1 00:39:53.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:53.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.335 issued rwts: total=4696,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.335 job1: (groupid=0, jobs=1): err= 0: pid=941836: Mon Nov 18 08:13:46 2024 00:39:53.335 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:39:53.335 slat (usec): min=2, max=12345, avg=107.56, stdev=765.32 00:39:53.335 clat (usec): min=3592, max=27781, avg=13959.67, stdev=3302.73 00:39:53.335 lat (usec): min=3630, max=27787, avg=14067.23, stdev=3370.13 00:39:53.335 clat percentiles (usec): 00:39:53.335 | 1.00th=[ 8979], 5.00th=[10814], 10.00th=[11076], 20.00th=[11994], 00:39:53.335 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13435], 00:39:53.335 | 70.00th=[13960], 80.00th=[15139], 90.00th=[19006], 95.00th=[21890], 
00:39:53.335 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26608], 99.95th=[27657], 00:39:53.335 | 99.99th=[27657] 00:39:53.335 write: IOPS=4614, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:39:53.335 slat (usec): min=4, max=11112, avg=93.65, stdev=562.49 00:39:53.335 clat (usec): min=1059, max=37426, avg=13589.29, stdev=3605.30 00:39:53.335 lat (usec): min=1070, max=37432, avg=13682.95, stdev=3649.32 00:39:53.335 clat percentiles (usec): 00:39:53.335 | 1.00th=[ 4359], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[12256], 00:39:53.335 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:39:53.335 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15533], 95.00th=[17171], 00:39:53.335 | 99.00th=[27919], 99.50th=[28967], 99.90th=[32900], 99.95th=[33162], 00:39:53.335 | 99.99th=[37487] 00:39:53.335 bw ( KiB/s): min=17232, max=19671, per=25.12%, avg=18451.50, stdev=1724.63, samples=2 00:39:53.335 iops : min= 4308, max= 4917, avg=4612.50, stdev=430.63, samples=2 00:39:53.335 lat (msec) : 2=0.04%, 4=0.55%, 10=5.75%, 20=88.25%, 50=5.40% 00:39:53.335 cpu : usr=5.49%, sys=8.68%, ctx=472, majf=0, minf=2 00:39:53.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:53.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.335 issued rwts: total=4608,4628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.335 job2: (groupid=0, jobs=1): err= 0: pid=941837: Mon Nov 18 08:13:46 2024 00:39:53.335 read: IOPS=4048, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1005msec) 00:39:53.335 slat (usec): min=2, max=11014, avg=122.14, stdev=744.18 00:39:53.335 clat (usec): min=979, max=31043, avg=15504.75, stdev=2897.15 00:39:53.335 lat (usec): min=4972, max=31068, avg=15626.89, stdev=2931.38 00:39:53.335 clat percentiles (usec): 00:39:53.335 | 1.00th=[ 5932], 
5.00th=[11338], 10.00th=[12518], 20.00th=[13698], 00:39:53.335 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15139], 60.00th=[15664], 00:39:53.335 | 70.00th=[16450], 80.00th=[17695], 90.00th=[19530], 95.00th=[20579], 00:39:53.335 | 99.00th=[22676], 99.50th=[22938], 99.90th=[24511], 99.95th=[25822], 00:39:53.335 | 99.99th=[31065] 00:39:53.335 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:39:53.335 slat (usec): min=4, max=9391, avg=114.66, stdev=643.76 00:39:53.335 clat (usec): min=7888, max=25660, avg=15632.08, stdev=1875.26 00:39:53.335 lat (usec): min=7895, max=25670, avg=15746.74, stdev=1923.75 00:39:53.335 clat percentiles (usec): 00:39:53.335 | 1.00th=[ 9372], 5.00th=[13304], 10.00th=[14222], 20.00th=[14615], 00:39:53.335 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15270], 60.00th=[15664], 00:39:53.335 | 70.00th=[16188], 80.00th=[16909], 90.00th=[17433], 95.00th=[18220], 00:39:53.335 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23987], 99.95th=[24249], 00:39:53.335 | 99.99th=[25560] 00:39:53.335 bw ( KiB/s): min=16048, max=16720, per=22.31%, avg=16384.00, stdev=475.18, samples=2 00:39:53.335 iops : min= 4012, max= 4180, avg=4096.00, stdev=118.79, samples=2 00:39:53.335 lat (usec) : 1000=0.01% 00:39:53.335 lat (msec) : 10=1.52%, 20=92.86%, 50=5.61% 00:39:53.335 cpu : usr=4.18%, sys=8.07%, ctx=390, majf=0, minf=1 00:39:53.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:53.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.335 issued rwts: total=4069,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.335 job3: (groupid=0, jobs=1): err= 0: pid=941838: Mon Nov 18 08:13:46 2024 00:39:53.335 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:39:53.335 slat (usec): min=2, max=8611, avg=107.45, 
stdev=727.71 00:39:53.335 clat (usec): min=2553, max=25225, avg=13863.73, stdev=2744.95 00:39:53.335 lat (usec): min=2559, max=25251, avg=13971.19, stdev=2797.52 00:39:53.335 clat percentiles (usec): 00:39:53.335 | 1.00th=[ 4752], 5.00th=[10159], 10.00th=[11207], 20.00th=[12387], 00:39:53.335 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13698], 60.00th=[13829], 00:39:53.335 | 70.00th=[14222], 80.00th=[15533], 90.00th=[17957], 95.00th=[19006], 00:39:53.335 | 99.00th=[20317], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:39:53.335 | 99.99th=[25297] 00:39:53.335 write: IOPS=4642, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1009msec); 0 zone resets 00:39:53.335 slat (usec): min=3, max=13121, avg=98.24, stdev=646.05 00:39:53.335 clat (usec): min=1600, max=27123, avg=13608.16, stdev=2668.94 00:39:53.335 lat (usec): min=1616, max=27138, avg=13706.40, stdev=2722.09 00:39:53.335 clat percentiles (usec): 00:39:53.335 | 1.00th=[ 3228], 5.00th=[ 8586], 10.00th=[11207], 20.00th=[12780], 00:39:53.335 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[13960], 00:39:53.335 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15008], 95.00th=[18482], 00:39:53.335 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21627], 99.95th=[21627], 00:39:53.335 | 99.99th=[27132] 00:39:53.335 bw ( KiB/s): min=18176, max=18688, per=25.09%, avg=18432.00, stdev=362.04, samples=2 00:39:53.335 iops : min= 4544, max= 4672, avg=4608.00, stdev=90.51, samples=2 00:39:53.335 lat (msec) : 2=0.11%, 4=1.10%, 10=4.97%, 20=91.77%, 50=2.06% 00:39:53.335 cpu : usr=3.67%, sys=4.96%, ctx=374, majf=0, minf=1 00:39:53.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:53.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.335 issued rwts: total=4608,4684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.335 
00:39:53.335 Run status group 0 (all jobs): 00:39:53.335 READ: bw=69.6MiB/s (73.0MB/s), 15.8MiB/s-18.3MiB/s (16.6MB/s-19.1MB/s), io=70.2MiB (73.7MB), run=1003-1009msec 00:39:53.335 WRITE: bw=71.7MiB/s (75.2MB/s), 15.9MiB/s-19.9MiB/s (16.7MB/s-20.9MB/s), io=72.4MiB (75.9MB), run=1003-1009msec 00:39:53.335 00:39:53.335 Disk stats (read/write): 00:39:53.335 nvme0n1: ios=3666/4096, merge=0/0, ticks=22567/26931, in_queue=49498, util=99.80% 00:39:53.335 nvme0n2: ios=3634/3847, merge=0/0, ticks=41069/41072, in_queue=82141, util=98.25% 00:39:53.335 nvme0n3: ios=3072/3543, merge=0/0, ticks=19434/21720, in_queue=41154, util=87.68% 00:39:53.335 nvme0n4: ios=3584/3959, merge=0/0, ticks=25681/27850, in_queue=53531, util=89.12% 00:39:53.335 08:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:53.335 08:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=941976 00:39:53.335 08:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:53.335 08:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:53.335 [global] 00:39:53.335 thread=1 00:39:53.335 invalidate=1 00:39:53.335 rw=read 00:39:53.335 time_based=1 00:39:53.335 runtime=10 00:39:53.335 ioengine=libaio 00:39:53.335 direct=1 00:39:53.335 bs=4096 00:39:53.335 iodepth=1 00:39:53.335 norandommap=1 00:39:53.335 numjobs=1 00:39:53.335 00:39:53.335 [job0] 00:39:53.335 filename=/dev/nvme0n1 00:39:53.335 [job1] 00:39:53.335 filename=/dev/nvme0n2 00:39:53.335 [job2] 00:39:53.335 filename=/dev/nvme0n3 00:39:53.335 [job3] 00:39:53.336 filename=/dev/nvme0n4 00:39:53.336 Could not set queue depth (nvme0n1) 00:39:53.336 Could not set queue depth (nvme0n2) 00:39:53.336 Could not set queue depth (nvme0n3) 00:39:53.336 Could not set queue depth (nvme0n4) 
00:39:53.595 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:53.595 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:53.595 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:53.595 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:53.595 fio-3.35 00:39:53.595 Starting 4 threads 00:39:56.887 08:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:56.887 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37453824, buflen=4096 00:39:56.887 fio: pid=942069, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:56.887 08:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:56.887 08:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:56.887 08:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:56.887 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10653696, buflen=4096 00:39:56.887 fio: pid=942068, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:57.145 08:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:57.145 08:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:57.145 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=27193344, buflen=4096 00:39:57.145 fio: pid=942066, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:57.403 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=20586496, buflen=4096 00:39:57.403 fio: pid=942067, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:57.403 08:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:57.403 08:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:57.403 00:39:57.403 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=942066: Mon Nov 18 08:13:50 2024 00:39:57.403 read: IOPS=1895, BW=7581KiB/s (7763kB/s)(25.9MiB/3503msec) 00:39:57.403 slat (usec): min=4, max=15301, avg=15.91, stdev=266.82 00:39:57.403 clat (usec): min=172, max=42021, avg=505.37, stdev=3113.28 00:39:57.404 lat (usec): min=179, max=45996, avg=521.29, stdev=3135.24 00:39:57.404 clat percentiles (usec): 00:39:57.404 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 231], 00:39:57.404 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:39:57.404 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 306], 95.00th=[ 445], 00:39:57.404 | 99.00th=[ 570], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:39:57.404 | 99.99th=[42206] 00:39:57.404 bw ( KiB/s): min= 96, max=15016, per=33.27%, avg=8231.83, stdev=5923.66, samples=6 00:39:57.404 iops : min= 24, max= 3754, avg=2057.50, stdev=1480.79, samples=6 00:39:57.404 lat (usec) : 250=44.86%, 500=52.14%, 750=2.33%, 1000=0.05% 00:39:57.404 lat (msec) : 
2=0.02%, 50=0.59% 00:39:57.404 cpu : usr=1.57%, sys=2.88%, ctx=6643, majf=0, minf=2 00:39:57.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.404 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.404 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:57.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:57.404 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=942067: Mon Nov 18 08:13:50 2024 00:39:57.404 read: IOPS=1328, BW=5311KiB/s (5439kB/s)(19.6MiB/3785msec) 00:39:57.404 slat (usec): min=4, max=13727, avg=23.65, stdev=384.56 00:39:57.404 clat (usec): min=188, max=41221, avg=722.14, stdev=4270.17 00:39:57.404 lat (usec): min=193, max=41227, avg=745.80, stdev=4286.93 00:39:57.404 clat percentiles (usec): 00:39:57.404 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:39:57.404 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 245], 00:39:57.404 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 388], 95.00th=[ 465], 00:39:57.404 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:57.404 | 99.99th=[41157] 00:39:57.404 bw ( KiB/s): min= 104, max=11016, per=19.93%, avg=4931.71, stdev=4143.26, samples=7 00:39:57.404 iops : min= 26, max= 2754, avg=1232.86, stdev=1035.86, samples=7 00:39:57.404 lat (usec) : 250=62.66%, 500=33.64%, 750=2.27%, 1000=0.16% 00:39:57.404 lat (msec) : 2=0.10%, 4=0.02%, 10=0.02%, 50=1.11% 00:39:57.404 cpu : usr=0.69%, sys=1.93%, ctx=5036, majf=0, minf=1 00:39:57.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.404 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.404 issued rwts: total=5027,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:39:57.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:57.404 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=942068: Mon Nov 18 08:13:50 2024 00:39:57.404 read: IOPS=806, BW=3224KiB/s (3301kB/s)(10.2MiB/3227msec) 00:39:57.404 slat (nsec): min=5823, max=84704, avg=20204.93, stdev=10120.36 00:39:57.404 clat (usec): min=219, max=41300, avg=1206.51, stdev=5733.09 00:39:57.404 lat (usec): min=227, max=41319, avg=1226.71, stdev=5733.43 00:39:57.404 clat percentiles (usec): 00:39:57.404 | 1.00th=[ 237], 5.00th=[ 269], 10.00th=[ 297], 20.00th=[ 318], 00:39:57.404 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 359], 60.00th=[ 383], 00:39:57.404 | 70.00th=[ 412], 80.00th=[ 453], 90.00th=[ 537], 95.00th=[ 553], 00:39:57.404 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:57.404 | 99.99th=[41157] 00:39:57.404 bw ( KiB/s): min= 112, max= 8016, per=13.99%, avg=3460.33, stdev=3088.43, samples=6 00:39:57.404 iops : min= 28, max= 2004, avg=865.00, stdev=772.13, samples=6 00:39:57.404 lat (usec) : 250=2.69%, 500=83.51%, 750=11.57%, 1000=0.15% 00:39:57.404 lat (msec) : 50=2.04% 00:39:57.404 cpu : usr=0.90%, sys=2.23%, ctx=2604, majf=0, minf=2 00:39:57.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.404 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.404 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:57.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:57.404 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=942069: Mon Nov 18 08:13:50 2024 00:39:57.404 read: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(35.7MiB/2913msec) 00:39:57.404 slat (nsec): min=5373, max=68500, avg=12989.69, stdev=6120.61 00:39:57.404 clat 
(usec): min=178, max=40739, avg=299.53, stdev=430.27 00:39:57.404 lat (usec): min=185, max=40756, avg=312.53, stdev=430.66 00:39:57.404 clat percentiles (usec): 00:39:57.404 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 251], 00:39:57.404 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:39:57.404 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 371], 95.00th=[ 441], 00:39:57.404 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 635], 99.95th=[ 668], 00:39:57.404 | 99.99th=[40633] 00:39:57.404 bw ( KiB/s): min=11025, max=14056, per=50.11%, avg=12397.00, stdev=1360.05, samples=5 00:39:57.404 iops : min= 2756, max= 3514, avg=3099.20, stdev=340.08, samples=5 00:39:57.404 lat (usec) : 250=18.51%, 500=78.57%, 750=2.88% 00:39:57.404 lat (msec) : 4=0.02%, 50=0.01% 00:39:57.404 cpu : usr=2.68%, sys=6.46%, ctx=9146, majf=0, minf=2 00:39:57.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.404 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.404 issued rwts: total=9145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:57.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:57.404 00:39:57.404 Run status group 0 (all jobs): 00:39:57.404 READ: bw=24.2MiB/s (25.3MB/s), 3224KiB/s-12.3MiB/s (3301kB/s-12.9MB/s), io=91.4MiB (95.9MB), run=2913-3785msec 00:39:57.404 00:39:57.404 Disk stats (read/write): 00:39:57.404 nvme0n1: ios=6527/0, merge=0/0, ticks=3180/0, in_queue=3180, util=95.51% 00:39:57.404 nvme0n2: ios=4520/0, merge=0/0, ticks=3460/0, in_queue=3460, util=95.58% 00:39:57.404 nvme0n3: ios=2626/0, merge=0/0, ticks=3148/0, in_queue=3148, util=99.16% 00:39:57.404 nvme0n4: ios=9013/0, merge=0/0, ticks=2579/0, in_queue=2579, util=96.75% 00:39:57.663 08:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:39:57.663 08:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:57.921 08:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:57.921 08:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:58.179 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:58.179 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:58.750 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:58.750 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:58.750 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:58.750 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 941976 00:39:58.750 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:58.750 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:59.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:59.008 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:59.009 nvmf hotplug test: fio failed as expected 00:39:59.009 08:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 
00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:59.268 rmmod nvme_tcp 00:39:59.268 rmmod nvme_fabrics 00:39:59.268 rmmod nvme_keyring 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 940072 ']' 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 940072 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 940072 ']' 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 940072 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:59.268 08:13:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 940072 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 940072' 00:39:59.268 killing process with pid 940072 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 940072 00:39:59.268 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 940072 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:59.526 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:59.527 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:59.527 08:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:02.066 00:40:02.066 real 0m23.892s 00:40:02.066 user 1m7.686s 00:40:02.066 sys 0m10.539s 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:02.066 ************************************ 00:40:02.066 END TEST nvmf_fio_target 00:40:02.066 ************************************ 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:02.066 ************************************ 00:40:02.066 START TEST nvmf_bdevio 00:40:02.066 ************************************ 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:02.066 * Looking for test storage... 00:40:02.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 
00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:02.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.066 --rc genhtml_branch_coverage=1 00:40:02.066 --rc genhtml_function_coverage=1 00:40:02.066 --rc genhtml_legend=1 00:40:02.066 --rc geninfo_all_blocks=1 00:40:02.066 --rc geninfo_unexecuted_blocks=1 00:40:02.066 00:40:02.066 ' 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:02.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.066 --rc genhtml_branch_coverage=1 00:40:02.066 --rc genhtml_function_coverage=1 00:40:02.066 --rc genhtml_legend=1 00:40:02.066 --rc geninfo_all_blocks=1 00:40:02.066 --rc geninfo_unexecuted_blocks=1 00:40:02.066 00:40:02.066 ' 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:02.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.066 --rc genhtml_branch_coverage=1 00:40:02.066 --rc genhtml_function_coverage=1 00:40:02.066 --rc genhtml_legend=1 00:40:02.066 --rc geninfo_all_blocks=1 00:40:02.066 --rc geninfo_unexecuted_blocks=1 00:40:02.066 00:40:02.066 ' 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:02.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.066 --rc genhtml_branch_coverage=1 00:40:02.066 --rc genhtml_function_coverage=1 00:40:02.066 --rc genhtml_legend=1 
00:40:02.066 --rc geninfo_all_blocks=1 00:40:02.066 --rc geninfo_unexecuted_blocks=1 00:40:02.066 00:40:02.066 ' 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.066 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:02.067 08:13:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.067 08:13:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:02.067 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.975 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:03.976 08:13:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:03.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:03.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:03.976 08:13:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:03.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:03.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:03.976 08:13:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:03.976 08:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:03.976 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:03.976 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:03.976 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:40:03.976 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:04.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:04.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:40:04.235 00:40:04.235 --- 10.0.0.2 ping statistics --- 00:40:04.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.235 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:04.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:04.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:40:04.235 00:40:04.235 --- 10.0.0.1 ping statistics --- 00:40:04.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.235 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=944807 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 944807 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 944807 ']' 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:04.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:04.235 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.235 [2024-11-18 08:13:57.189354] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:04.235 [2024-11-18 08:13:57.190433] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:40:04.235 [2024-11-18 08:13:57.190505] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:04.235 [2024-11-18 08:13:57.261384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:04.235 [2024-11-18 08:13:57.307789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:04.235 [2024-11-18 08:13:57.307840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:04.235 [2024-11-18 08:13:57.307870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:04.235 [2024-11-18 08:13:57.307881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:04.235 [2024-11-18 08:13:57.307891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:04.235 [2024-11-18 08:13:57.309454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:04.235 [2024-11-18 08:13:57.309518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:04.235 [2024-11-18 08:13:57.309586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:04.235 [2024-11-18 08:13:57.309590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:04.493 [2024-11-18 08:13:57.394296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:04.493 [2024-11-18 08:13:57.394543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:04.493 [2024-11-18 08:13:57.394808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:04.493 [2024-11-18 08:13:57.395323] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:04.493 [2024-11-18 08:13:57.395590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:04.493 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:04.493 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:04.493 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.494 [2024-11-18 08:13:57.450263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.494 Malloc0 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.494 [2024-11-18 08:13:57.514507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:04.494 { 00:40:04.494 "params": { 00:40:04.494 "name": "Nvme$subsystem", 00:40:04.494 "trtype": "$TEST_TRANSPORT", 00:40:04.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:04.494 "adrfam": "ipv4", 00:40:04.494 "trsvcid": "$NVMF_PORT", 00:40:04.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:04.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:04.494 "hdgst": ${hdgst:-false}, 00:40:04.494 "ddgst": ${ddgst:-false} 00:40:04.494 }, 00:40:04.494 "method": "bdev_nvme_attach_controller" 00:40:04.494 } 00:40:04.494 EOF 00:40:04.494 )") 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:04.494 08:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:04.494 "params": { 00:40:04.494 "name": "Nvme1", 00:40:04.494 "trtype": "tcp", 00:40:04.494 "traddr": "10.0.0.2", 00:40:04.494 "adrfam": "ipv4", 00:40:04.494 "trsvcid": "4420", 00:40:04.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:04.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:04.494 "hdgst": false, 00:40:04.494 "ddgst": false 00:40:04.494 }, 00:40:04.494 "method": "bdev_nvme_attach_controller" 00:40:04.494 }' 00:40:04.494 [2024-11-18 08:13:57.565444] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:40:04.494 [2024-11-18 08:13:57.565553] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944837 ] 00:40:04.753 [2024-11-18 08:13:57.636644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:04.753 [2024-11-18 08:13:57.689292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.753 [2024-11-18 08:13:57.689347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:04.753 [2024-11-18 08:13:57.689351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.013 I/O targets: 00:40:05.013 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:05.013 00:40:05.013 00:40:05.013 CUnit - A unit testing framework for C - Version 2.1-3 00:40:05.013 http://cunit.sourceforge.net/ 00:40:05.013 00:40:05.013 00:40:05.013 Suite: bdevio tests on: Nvme1n1 00:40:05.013 Test: blockdev write read block ...passed 00:40:05.013 Test: blockdev write zeroes read block ...passed 00:40:05.013 Test: blockdev write zeroes read no split ...passed 00:40:05.013 Test: blockdev 
write zeroes read split ...passed 00:40:05.013 Test: blockdev write zeroes read split partial ...passed 00:40:05.013 Test: blockdev reset ...[2024-11-18 08:13:57.976391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:05.013 [2024-11-18 08:13:57.976503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15adac0 (9): Bad file descriptor 00:40:05.013 [2024-11-18 08:13:57.980656] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:05.013 passed 00:40:05.013 Test: blockdev write read 8 blocks ...passed 00:40:05.013 Test: blockdev write read size > 128k ...passed 00:40:05.013 Test: blockdev write read invalid size ...passed 00:40:05.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:05.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:05.013 Test: blockdev write read max offset ...passed 00:40:05.273 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:05.273 Test: blockdev writev readv 8 blocks ...passed 00:40:05.273 Test: blockdev writev readv 30 x 1block ...passed 00:40:05.273 Test: blockdev writev readv block ...passed 00:40:05.273 Test: blockdev writev readv size > 128k ...passed 00:40:05.273 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:05.273 Test: blockdev comparev and writev ...[2024-11-18 08:13:58.272697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.273 [2024-11-18 08:13:58.272734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.272759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.273 
[2024-11-18 08:13:58.272777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.273203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.273 [2024-11-18 08:13:58.273228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.273251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.273 [2024-11-18 08:13:58.273268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.273685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.273 [2024-11-18 08:13:58.273709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.273732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.273 [2024-11-18 08:13:58.273749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.274149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.273 [2024-11-18 08:13:58.274173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.274195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.273 [2024-11-18 08:13:58.274212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:05.273 passed 00:40:05.273 Test: blockdev nvme passthru rw ...passed 00:40:05.273 Test: blockdev nvme passthru vendor specific ...[2024-11-18 08:13:58.355780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:05.273 [2024-11-18 08:13:58.355808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.355961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:05.273 [2024-11-18 08:13:58.355985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.356138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:05.273 [2024-11-18 08:13:58.356161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:05.273 [2024-11-18 08:13:58.356312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:05.273 [2024-11-18 08:13:58.356348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:05.273 passed 00:40:05.532 Test: blockdev nvme admin passthru ...passed 00:40:05.532 Test: blockdev copy ...passed 00:40:05.532 00:40:05.532 Run Summary: Type Total Ran Passed Failed Inactive 00:40:05.532 suites 1 1 n/a 0 0 00:40:05.532 tests 23 23 23 0 0 00:40:05.532 asserts 152 152 152 0 n/a 00:40:05.532 00:40:05.532 Elapsed time = 1.082 
seconds 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:05.532 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:05.532 rmmod nvme_tcp 00:40:05.532 rmmod nvme_fabrics 00:40:05.792 rmmod nvme_keyring 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 944807 ']' 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 944807 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 944807 ']' 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 944807 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 944807 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 944807' 00:40:05.792 killing process with pid 944807 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 944807 00:40:05.792 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 944807 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:06.052 08:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:07.975 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:07.975 00:40:07.975 real 0m6.362s 00:40:07.975 user 0m7.849s 00:40:07.975 sys 0m2.469s 00:40:07.975 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:07.975 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.975 ************************************ 00:40:07.975 END TEST nvmf_bdevio 00:40:07.975 ************************************ 00:40:07.975 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:07.975 00:40:07.975 real 3m54.926s 00:40:07.975 user 8m52.222s 00:40:07.975 sys 1m24.950s 00:40:07.975 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:40:07.975 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:07.975 ************************************ 00:40:07.975 END TEST nvmf_target_core_interrupt_mode 00:40:07.975 ************************************ 00:40:07.975 08:14:01 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:07.975 08:14:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:07.975 08:14:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:07.975 08:14:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:07.975 ************************************ 00:40:07.975 START TEST nvmf_interrupt 00:40:07.975 ************************************ 00:40:07.975 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:08.234 * Looking for test storage... 
00:40:08.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:08.234 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:08.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.235 --rc genhtml_branch_coverage=1 00:40:08.235 --rc genhtml_function_coverage=1 00:40:08.235 --rc genhtml_legend=1 00:40:08.235 --rc geninfo_all_blocks=1 00:40:08.235 --rc geninfo_unexecuted_blocks=1 00:40:08.235 00:40:08.235 ' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:08.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.235 --rc genhtml_branch_coverage=1 00:40:08.235 --rc 
genhtml_function_coverage=1 00:40:08.235 --rc genhtml_legend=1 00:40:08.235 --rc geninfo_all_blocks=1 00:40:08.235 --rc geninfo_unexecuted_blocks=1 00:40:08.235 00:40:08.235 ' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:08.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.235 --rc genhtml_branch_coverage=1 00:40:08.235 --rc genhtml_function_coverage=1 00:40:08.235 --rc genhtml_legend=1 00:40:08.235 --rc geninfo_all_blocks=1 00:40:08.235 --rc geninfo_unexecuted_blocks=1 00:40:08.235 00:40:08.235 ' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:08.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.235 --rc genhtml_branch_coverage=1 00:40:08.235 --rc genhtml_function_coverage=1 00:40:08.235 --rc genhtml_legend=1 00:40:08.235 --rc geninfo_all_blocks=1 00:40:08.235 --rc geninfo_unexecuted_blocks=1 00:40:08.235 00:40:08.235 ' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:08.235 
08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.235 
08:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:08.235 08:14:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:08.235 
08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:08.235 08:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.767 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:10.767 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:10.767 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:10.768 08:14:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:10.768 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:10.768 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:10.768 08:14:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:10.768 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:10.768 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:10.768 08:14:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:10.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:10.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:40:10.768 00:40:10.768 --- 10.0.0.2 ping statistics --- 00:40:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.768 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:10.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:10.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:40:10.768 00:40:10.768 --- 10.0.0.1 ping statistics --- 00:40:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.768 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:10.768 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:10.769 08:14:03 
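The `nvmf_tcp_init` sequence traced above moves the target-side port (cvl_0_0) into a private network namespace addressed as 10.0.0.2, leaves the initiator-side port (cvl_0_1) at 10.0.0.1 in the default namespace, and then pings in both directions. A dry-run sketch of that sequence, using the interface names and addresses from the log; `run` is an illustrative wrapper, and DRY_RUN defaults to on since the real commands need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns split performed by nvmf_tcp_init above.
: "${DRY_RUN:=1}"                    # set DRY_RUN= (empty) to really execute
run() { ${DRY_RUN:+echo} "$@"; }

NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                            # target port into ns
run ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target IP
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# verify both directions, as the log does
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The `ipts` wrapper seen in the trace additionally tags its iptables ACCEPT rule with an `SPDK_NVMF:` comment so cleanup can find and delete it later.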
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=946924 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 946924 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 946924 ']' 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:10.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.769 [2024-11-18 08:14:03.545761] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:10.769 [2024-11-18 08:14:03.546897] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:40:10.769 [2024-11-18 08:14:03.546967] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:10.769 [2024-11-18 08:14:03.618685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:10.769 [2024-11-18 08:14:03.663878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:10.769 [2024-11-18 08:14:03.663934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:10.769 [2024-11-18 08:14:03.663963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:10.769 [2024-11-18 08:14:03.663974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:10.769 [2024-11-18 08:14:03.663984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:10.769 [2024-11-18 08:14:03.665366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.769 [2024-11-18 08:14:03.665372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.769 [2024-11-18 08:14:03.748122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:10.769 [2024-11-18 08:14:03.748169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:10.769 [2024-11-18 08:14:03.748393] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:10.769 5000+0 records in 00:40:10.769 5000+0 records out 00:40:10.769 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0117519 s, 871 MB/s 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.769 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.027 AIO0 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.027 08:14:03 
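The `setup_bdev_aio` step above writes a 10 MB zero-filled backing file (5000 blocks of 2048 bytes, matching the dd line in the log) and registers it as bdev "AIO0" over RPC. A sketch of the file-creation half; the rpc.py call is left commented out since it needs a running nvmf_tgt:

```shell
#!/usr/bin/env bash
# Create the AIO backing file exactly as the dd line in the trace does.
set -e
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 status=none
stat -c %s "$aiofile"    # prints 10240000 (2048 * 5000 bytes)
# scripts/rpc.py bdev_aio_create "$aiofile" AIO0 2048   # needs a live target
rm -f "$aiofile"
```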
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.027 [2024-11-18 08:14:03.862022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.027 [2024-11-18 08:14:03.886232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 946924 0 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
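The nvmf bring-up RPCs from the trace, collected in order: create the TCP transport, create the subsystem, attach the AIO namespace, and listen on the target-namespace IP. `rpc` here is a stub standing in for scripts/rpc.py so the sequence can be previewed without a running target:

```shell
#!/usr/bin/env bash
# Preview of the rpc_cmd sequence from target/interrupt.sh@18-21 above.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -q 256
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```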
-- # reactor_is_busy_or_idle 946924 0 idle 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=946924 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 946924 -w 256 00:40:11.027 08:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 946924 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0' 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 946924 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 946924 1 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 946924 1 idle 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=946924 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 946924 -w 256 00:40:11.027 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 946928 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1' 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 946928 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 
reactor_1 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=947081 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 946924 0 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 946924 0 busy 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=946924 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 946924 -w 256 00:40:11.286 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 946924 root 20 0 128.2g 48384 34560 R 33.3 0.1 0:00.30 reactor_0' 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 946924 root 20 0 128.2g 48384 34560 R 33.3 0.1 0:00.30 reactor_0 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=33.3 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=33 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 946924 1 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 946924 1 busy 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=946924 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 946924 -w 256 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 946928 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.18 reactor_1' 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 946928 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.18 reactor_1 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.546 08:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 947081 00:40:21.518 Initializing NVMe Controllers 00:40:21.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:21.518 Controller IO queue size 256, less than required. 00:40:21.518 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:21.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:21.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:21.518 Initialization complete. Launching workers. 
00:40:21.518 ======================================================== 00:40:21.518 Latency(us) 00:40:21.518 Device Information : IOPS MiB/s Average min max 00:40:21.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 12911.90 50.44 19840.88 4027.93 24190.60 00:40:21.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13457.10 52.57 19038.02 3976.65 24066.58 00:40:21.519 ======================================================== 00:40:21.519 Total : 26369.00 103.00 19431.15 3976.65 24190.60 00:40:21.519 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 946924 0 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 946924 0 idle 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=946924 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 946924 -w 256 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 946924 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:19.74 reactor_0' 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 946924 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:19.74 reactor_0 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 946924 1 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 946924 1 idle 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=946924 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:21.519 08:14:14 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 946924 -w 256 00:40:21.519 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 946928 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.51 reactor_1' 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 946928 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.51 reactor_1 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:21.779 08:14:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:22.037 08:14:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:40:22.037 08:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:22.037 08:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:22.037 08:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:22.037 08:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 946924 0 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 946924 0 idle 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=946924 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.574 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 946924 -w 256 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 946924 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:19.84 reactor_0' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 946924 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:19.84 reactor_0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 946924 1 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 946924 1 idle 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=946924 00:40:24.575 
08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 946924 -w 256 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 946928 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:09.54 reactor_1' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 946928 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:09.54 reactor_1 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:24.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:24.575 rmmod nvme_tcp 00:40:24.575 rmmod nvme_fabrics 00:40:24.575 rmmod nvme_keyring 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:24.575 08:14:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 946924 ']' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 946924 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 946924 ']' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 946924 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946924 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946924' 00:40:24.575 killing process with pid 946924 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 946924 00:40:24.575 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 946924 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:24.833 08:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.374 08:14:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:27.374 00:40:27.374 real 0m18.804s 00:40:27.374 user 0m36.760s 00:40:27.374 sys 0m6.666s 00:40:27.374 08:14:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.374 08:14:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:27.374 ************************************ 00:40:27.374 END TEST nvmf_interrupt 00:40:27.374 ************************************ 00:40:27.374 00:40:27.374 real 32m59.421s 00:40:27.374 user 87m28.218s 00:40:27.374 sys 8m3.479s 00:40:27.374 08:14:19 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.374 08:14:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.374 ************************************ 00:40:27.374 END TEST nvmf_tcp 00:40:27.374 ************************************ 00:40:27.374 08:14:19 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:27.374 08:14:19 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:27.374 08:14:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:27.374 08:14:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:27.374 08:14:19 -- common/autotest_common.sh@10 -- # set +x 00:40:27.374 ************************************ 
00:40:27.374 START TEST spdkcli_nvmf_tcp 00:40:27.374 ************************************ 00:40:27.374 08:14:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:27.374 * Looking for test storage... 00:40:27.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:27.374 08:14:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:27.374 08:14:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:40:27.374 08:14:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:27.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.374 --rc genhtml_branch_coverage=1 00:40:27.374 --rc genhtml_function_coverage=1 00:40:27.374 --rc genhtml_legend=1 00:40:27.374 --rc geninfo_all_blocks=1 00:40:27.374 --rc geninfo_unexecuted_blocks=1 00:40:27.374 00:40:27.374 ' 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:27.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.374 --rc genhtml_branch_coverage=1 00:40:27.374 --rc genhtml_function_coverage=1 00:40:27.374 --rc genhtml_legend=1 00:40:27.374 --rc geninfo_all_blocks=1 
00:40:27.374 --rc geninfo_unexecuted_blocks=1 00:40:27.374 00:40:27.374 ' 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:27.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.374 --rc genhtml_branch_coverage=1 00:40:27.374 --rc genhtml_function_coverage=1 00:40:27.374 --rc genhtml_legend=1 00:40:27.374 --rc geninfo_all_blocks=1 00:40:27.374 --rc geninfo_unexecuted_blocks=1 00:40:27.374 00:40:27.374 ' 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:27.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.374 --rc genhtml_branch_coverage=1 00:40:27.374 --rc genhtml_function_coverage=1 00:40:27.374 --rc genhtml_legend=1 00:40:27.374 --rc geninfo_all_blocks=1 00:40:27.374 --rc geninfo_unexecuted_blocks=1 00:40:27.374 00:40:27.374 ' 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.374 08:14:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:27.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=949083 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 949083 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 949083 ']' 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.375 08:14:20 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:27.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.375 [2024-11-18 08:14:20.149534] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:40:27.375 [2024-11-18 08:14:20.149618] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949083 ] 00:40:27.375 [2024-11-18 08:14:20.222578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:27.375 [2024-11-18 08:14:20.272574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:27.375 [2024-11-18 08:14:20.272579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:27.375 
08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.375 08:14:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:27.375 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:27.375 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:27.375 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:27.375 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:27.375 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:27.375 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:27.375 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:27.375 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:27.375 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:27.375 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:27.375 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:27.375 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:27.375 ' 00:40:30.668 [2024-11-18 08:14:23.035085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:31.236 [2024-11-18 08:14:24.307359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:33.770 [2024-11-18 08:14:26.650409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:40:35.677 [2024-11-18 08:14:28.664670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:37.672 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:37.672 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:37.672 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:37.672 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:37.672 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:37.672 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:37.672 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:37.672 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:37.672 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:37.672 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:37.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:37.672 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:37.672 08:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:37.672 08:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:37.672 
08:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:37.672 08:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:37.672 08:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:37.672 08:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:37.672 08:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:37.672 08:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:37.932 08:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:37.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:37.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:37.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:37.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:37.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:37.932 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:37.932 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:37.932 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:37.932 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:37.932 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:37.932 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:37.932 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:37.932 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:37.932 ' 00:40:43.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:43.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:43.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:43.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:43.204 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:43.204 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:43.204 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:43.204 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:43.204 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:43.204 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:43.204 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:43.204 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:43.204 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:43.204 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 949083 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 949083 ']' 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 949083 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:43.204 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949083 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949083' 00:40:43.462 killing process with pid 949083 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 949083 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 949083 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 949083 ']' 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 949083 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 949083 ']' 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 949083 00:40:43.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (949083) - No such process 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 949083 is not found' 00:40:43.462 Process with pid 949083 is not found 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:43.462 00:40:43.462 real 0m16.563s 00:40:43.462 user 0m35.302s 00:40:43.462 sys 0m0.788s 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:43.462 08:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:43.462 ************************************ 00:40:43.462 END TEST spdkcli_nvmf_tcp 00:40:43.462 ************************************ 00:40:43.462 08:14:36 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:43.462 08:14:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:43.462 08:14:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:43.462 08:14:36 -- common/autotest_common.sh@10 
-- # set +x 00:40:43.462 ************************************ 00:40:43.462 START TEST nvmf_identify_passthru 00:40:43.462 ************************************ 00:40:43.462 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:43.721 * Looking for test storage... 00:40:43.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:43.722 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:43.722 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:43.722 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:43.722 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:43.722 08:14:36 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:43.722 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:43.722 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.722 --rc genhtml_branch_coverage=1 00:40:43.722 --rc genhtml_function_coverage=1 00:40:43.722 --rc genhtml_legend=1 00:40:43.722 --rc geninfo_all_blocks=1 00:40:43.722 --rc geninfo_unexecuted_blocks=1 00:40:43.722 00:40:43.722 ' 00:40:43.722 
08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.722 --rc genhtml_branch_coverage=1 00:40:43.722 --rc genhtml_function_coverage=1 00:40:43.722 --rc genhtml_legend=1 00:40:43.722 --rc geninfo_all_blocks=1 00:40:43.722 --rc geninfo_unexecuted_blocks=1 00:40:43.722 00:40:43.722 ' 00:40:43.722 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.722 --rc genhtml_branch_coverage=1 00:40:43.722 --rc genhtml_function_coverage=1 00:40:43.722 --rc genhtml_legend=1 00:40:43.722 --rc geninfo_all_blocks=1 00:40:43.722 --rc geninfo_unexecuted_blocks=1 00:40:43.722 00:40:43.722 ' 00:40:43.722 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.722 --rc genhtml_branch_coverage=1 00:40:43.722 --rc genhtml_function_coverage=1 00:40:43.722 --rc genhtml_legend=1 00:40:43.722 --rc geninfo_all_blocks=1 00:40:43.722 --rc geninfo_unexecuted_blocks=1 00:40:43.722 00:40:43.722 ' 00:40:43.722 08:14:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:43.722 08:14:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:43.722 08:14:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.722 08:14:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.722 08:14:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.722 08:14:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:43.722 08:14:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:43.722 08:14:36 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:43.722 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:43.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:43.723 08:14:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:43.723 08:14:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:43.723 08:14:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:43.723 08:14:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:43.723 08:14:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:43.723 08:14:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.723 08:14:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.723 08:14:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.723 08:14:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:43.723 08:14:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.723 08:14:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:43.723 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:43.723 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:43.723 08:14:36 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:43.723 08:14:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:46.264 
08:14:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:46.264 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:46.264 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:46.264 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.264 08:14:38 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:46.264 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:46.264 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:46.265 
08:14:38 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:46.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:46.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:40:46.265 00:40:46.265 --- 10.0.0.2 ping statistics --- 00:40:46.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.265 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:46.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:46.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:40:46.265 00:40:46.265 --- 10.0.0.1 ping statistics --- 00:40:46.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.265 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:46.265 08:14:38 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:46.265 08:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.265 08:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:46.265 
08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:46.265 08:14:38 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:46.265 08:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:46.265 08:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:46.265 08:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:46.265 08:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:46.265 08:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:50.457 08:14:43 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:40:50.457 08:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:50.457 08:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:50.457 08:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:54.649 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:54.649 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.649 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.649 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=953594 00:40:54.649 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:54.649 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:54.649 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 953594 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 953594 ']' 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:54.649 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.649 [2024-11-18 08:14:47.494089] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:40:54.649 [2024-11-18 08:14:47.494196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:54.650 [2024-11-18 08:14:47.569552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:54.650 [2024-11-18 08:14:47.618673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:54.650 [2024-11-18 08:14:47.618733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:54.650 [2024-11-18 08:14:47.618765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:54.650 [2024-11-18 08:14:47.618777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:54.650 [2024-11-18 08:14:47.618787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:54.650 [2024-11-18 08:14:47.620339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:54.650 [2024-11-18 08:14:47.620403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:54.650 [2024-11-18 08:14:47.620406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.650 [2024-11-18 08:14:47.620374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:54.908 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.908 INFO: Log level set to 20 00:40:54.908 INFO: Requests: 00:40:54.908 { 00:40:54.908 "jsonrpc": "2.0", 00:40:54.908 "method": "nvmf_set_config", 00:40:54.908 "id": 1, 00:40:54.908 "params": { 00:40:54.908 "admin_cmd_passthru": { 00:40:54.908 "identify_ctrlr": true 00:40:54.908 } 00:40:54.908 } 00:40:54.908 } 00:40:54.908 00:40:54.908 INFO: response: 00:40:54.908 { 00:40:54.908 "jsonrpc": "2.0", 00:40:54.908 "id": 1, 00:40:54.908 "result": true 00:40:54.908 } 00:40:54.908 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.908 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.908 INFO: Setting log level to 20 00:40:54.908 INFO: Setting log level to 20 00:40:54.908 INFO: Log level set to 20 00:40:54.908 INFO: Log level set to 20 00:40:54.908 
INFO: Requests: 00:40:54.908 { 00:40:54.908 "jsonrpc": "2.0", 00:40:54.908 "method": "framework_start_init", 00:40:54.908 "id": 1 00:40:54.908 } 00:40:54.908 00:40:54.908 INFO: Requests: 00:40:54.908 { 00:40:54.908 "jsonrpc": "2.0", 00:40:54.908 "method": "framework_start_init", 00:40:54.908 "id": 1 00:40:54.908 } 00:40:54.908 00:40:54.908 [2024-11-18 08:14:47.883503] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:54.908 INFO: response: 00:40:54.908 { 00:40:54.908 "jsonrpc": "2.0", 00:40:54.908 "id": 1, 00:40:54.908 "result": true 00:40:54.908 } 00:40:54.908 00:40:54.908 INFO: response: 00:40:54.908 { 00:40:54.908 "jsonrpc": "2.0", 00:40:54.908 "id": 1, 00:40:54.908 "result": true 00:40:54.908 } 00:40:54.908 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.908 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.908 INFO: Setting log level to 40 00:40:54.908 INFO: Setting log level to 40 00:40:54.908 INFO: Setting log level to 40 00:40:54.908 [2024-11-18 08:14:47.893446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.908 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.908 08:14:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:54.908 08:14:47 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.908 08:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.196 Nvme0n1 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.197 08:14:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.197 08:14:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.197 08:14:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.197 [2024-11-18 08:14:50.800393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.197 08:14:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.197 08:14:50 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.197 [ 00:40:58.197 { 00:40:58.197 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:58.197 "subtype": "Discovery", 00:40:58.197 "listen_addresses": [], 00:40:58.197 "allow_any_host": true, 00:40:58.197 "hosts": [] 00:40:58.197 }, 00:40:58.197 { 00:40:58.197 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:58.197 "subtype": "NVMe", 00:40:58.197 "listen_addresses": [ 00:40:58.197 { 00:40:58.197 "trtype": "TCP", 00:40:58.197 "adrfam": "IPv4", 00:40:58.197 "traddr": "10.0.0.2", 00:40:58.197 "trsvcid": "4420" 00:40:58.197 } 00:40:58.197 ], 00:40:58.197 "allow_any_host": true, 00:40:58.197 "hosts": [], 00:40:58.197 "serial_number": "SPDK00000000000001", 00:40:58.197 "model_number": "SPDK bdev Controller", 00:40:58.197 "max_namespaces": 1, 00:40:58.197 "min_cntlid": 1, 00:40:58.197 "max_cntlid": 65519, 00:40:58.197 "namespaces": [ 00:40:58.197 { 00:40:58.197 "nsid": 1, 00:40:58.197 "bdev_name": "Nvme0n1", 00:40:58.197 "name": "Nvme0n1", 00:40:58.197 "nguid": "B3666B3231DF4935A2452A07A2251502", 00:40:58.197 "uuid": "b3666b32-31df-4935-a245-2a07a2251502" 00:40:58.197 } 00:40:58.197 ] 00:40:58.197 } 00:40:58.197 ] 00:40:58.197 08:14:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.197 08:14:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:58.197 08:14:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:58.197 08:14:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:58.197 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.197 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.197 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:58.197 08:14:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:58.197 rmmod nvme_tcp 00:40:58.197 rmmod nvme_fabrics 00:40:58.197 rmmod nvme_keyring 00:40:58.197 08:14:51 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 953594 ']' 00:40:58.197 08:14:51 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 953594 00:40:58.197 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 953594 ']' 00:40:58.197 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 953594 00:40:58.197 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:40:58.197 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:58.197 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953594 00:40:58.457 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:58.457 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:58.457 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953594' 00:40:58.457 killing process with pid 953594 00:40:58.457 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 953594 00:40:58.457 08:14:51 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 953594 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:59.832 08:14:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:59.832 08:14:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:59.832 08:14:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:02.369 08:14:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:02.369 00:41:02.369 real 0m18.308s 00:41:02.369 user 0m27.391s 00:41:02.369 sys 0m2.432s 00:41:02.369 08:14:54 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:02.369 08:14:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:02.369 ************************************ 00:41:02.369 END TEST nvmf_identify_passthru 00:41:02.369 ************************************ 00:41:02.369 08:14:54 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:02.369 08:14:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:02.369 08:14:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:02.369 08:14:54 -- common/autotest_common.sh@10 -- # set +x 00:41:02.369 ************************************ 00:41:02.369 START TEST nvmf_dif 00:41:02.369 ************************************ 00:41:02.369 08:14:54 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:02.369 * Looking for test storage... 
00:41:02.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:02.369 08:14:54 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:02.369 08:14:54 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:41:02.369 08:14:54 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:02.369 08:14:55 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:02.369 08:14:55 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:02.369 08:14:55 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.369 --rc genhtml_branch_coverage=1 00:41:02.369 --rc genhtml_function_coverage=1 00:41:02.369 --rc genhtml_legend=1 00:41:02.369 --rc geninfo_all_blocks=1 00:41:02.369 --rc geninfo_unexecuted_blocks=1 00:41:02.369 00:41:02.369 ' 00:41:02.369 08:14:55 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.369 --rc genhtml_branch_coverage=1 00:41:02.369 --rc genhtml_function_coverage=1 00:41:02.369 --rc genhtml_legend=1 00:41:02.369 --rc geninfo_all_blocks=1 00:41:02.369 --rc geninfo_unexecuted_blocks=1 00:41:02.369 00:41:02.369 ' 00:41:02.369 08:14:55 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:41:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.369 --rc genhtml_branch_coverage=1 00:41:02.369 --rc genhtml_function_coverage=1 00:41:02.369 --rc genhtml_legend=1 00:41:02.369 --rc geninfo_all_blocks=1 00:41:02.369 --rc geninfo_unexecuted_blocks=1 00:41:02.369 00:41:02.369 ' 00:41:02.369 08:14:55 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.369 --rc genhtml_branch_coverage=1 00:41:02.369 --rc genhtml_function_coverage=1 00:41:02.369 --rc genhtml_legend=1 00:41:02.369 --rc geninfo_all_blocks=1 00:41:02.369 --rc geninfo_unexecuted_blocks=1 00:41:02.369 00:41:02.369 ' 00:41:02.369 08:14:55 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:02.369 08:14:55 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:02.369 08:14:55 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:02.369 08:14:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:02.369 08:14:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.369 08:14:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.370 08:14:55 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.370 08:14:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:02.370 08:14:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:02.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:02.370 08:14:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:02.370 08:14:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:41:02.370 08:14:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:02.370 08:14:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:02.370 08:14:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.370 08:14:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:02.370 08:14:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:02.370 08:14:55 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:41:02.370 08:14:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:04.273 08:14:57 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:04.273 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:04.273 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:04.273 08:14:57 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:04.273 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:04.273 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:04.273 
08:14:57 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:04.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:04.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:41:04.273 00:41:04.273 --- 10.0.0.2 ping statistics --- 00:41:04.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.273 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:04.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:04.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:41:04.273 00:41:04.273 --- 10.0.0.1 ping statistics --- 00:41:04.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.273 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:04.273 08:14:57 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:05.649 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:05.649 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:05.649 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:05.649 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:05.649 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:05.649 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:05.649 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:05.649 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:05.649 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:05.649 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:05.649 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:05.649 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:05.649 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:41:05.649 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:05.649 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:05.649 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:05.649 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:05.649 08:14:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:05.649 08:14:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:05.649 08:14:58 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:05.649 08:14:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=956859 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:05.649 08:14:58 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 956859 00:41:05.649 08:14:58 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 956859 ']' 00:41:05.649 08:14:58 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:05.649 08:14:58 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:05.649 08:14:58 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:05.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:05.649 08:14:58 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:05.649 08:14:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:05.649 [2024-11-18 08:14:58.618169] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:41:05.649 [2024-11-18 08:14:58.618240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:05.649 [2024-11-18 08:14:58.688650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.649 [2024-11-18 08:14:58.731955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:05.649 [2024-11-18 08:14:58.732019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:05.649 [2024-11-18 08:14:58.732053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:05.649 [2024-11-18 08:14:58.732065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:05.649 [2024-11-18 08:14:58.732075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:05.649 [2024-11-18 08:14:58.732663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:41:05.908 08:14:58 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:05.908 08:14:58 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:05.908 08:14:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:05.908 08:14:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:05.908 [2024-11-18 08:14:58.879410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.908 08:14:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:05.908 08:14:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:05.908 ************************************ 00:41:05.908 START TEST fio_dif_1_default 00:41:05.908 ************************************ 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:05.908 bdev_null0 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.908 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:05.909 [2024-11-18 08:14:58.935741] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:05.909 { 00:41:05.909 "params": { 00:41:05.909 "name": "Nvme$subsystem", 00:41:05.909 "trtype": "$TEST_TRANSPORT", 00:41:05.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:05.909 "adrfam": "ipv4", 00:41:05.909 "trsvcid": "$NVMF_PORT", 00:41:05.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:05.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:05.909 "hdgst": ${hdgst:-false}, 00:41:05.909 "ddgst": ${ddgst:-false} 00:41:05.909 }, 00:41:05.909 "method": "bdev_nvme_attach_controller" 00:41:05.909 } 00:41:05.909 EOF 00:41:05.909 )") 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
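The null bdev created above (`bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1`) is 64 MiB of 512-byte blocks carrying 16 bytes of DIF metadata each; treating the size argument as MiB is an assumption about `bdev_null_create`, not something the log states. The resulting geometry can be sanity-checked with plain shell arithmetic:

```shell
# Geometry of the DIF-enabled null bdev from the log above; the MiB unit for
# the size argument is an assumption about bdev_null_create, not shown here.
size_mib=64 blk=512 md=16
blocks=$(( size_mib * 1024 * 1024 / blk ))   # number of logical blocks
fmt=$(( blk + md ))                          # formatted LBA size incl. metadata
echo "blocks=$blocks formatted_lba_size=$fmt"   # prints: blocks=131072 formatted_lba_size=528
```

The 528-byte formatted size is why DIF-type-1 protection information travels with every block when `--dif-insert-or-strip` is enabled on the transport.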
00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
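The rendered controller config that `jq` emits next is produced by expanding the `gen_nvmf_target_json` heredoc template above. A minimal re-expansion of that template for subsystem 0, with the variable values copied from this log (TCP to 10.0.0.2:4420), looks like:

```shell
# Sketch of expanding the gen_nvmf_target_json template for subsystem 0.
# Values are taken from the rendered config in the log; the real helper lives
# in nvmf/common.sh, defaults hdgst/ddgst via ${hdgst:-false}, and pipes the
# result through jq.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

This JSON is handed to the fio bdev plugin on `/dev/fd/62`, which is how the fio job attaches an NVMe-oF controller without a config file on disk.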
00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:05.909 "params": { 00:41:05.909 "name": "Nvme0", 00:41:05.909 "trtype": "tcp", 00:41:05.909 "traddr": "10.0.0.2", 00:41:05.909 "adrfam": "ipv4", 00:41:05.909 "trsvcid": "4420", 00:41:05.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:05.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:05.909 "hdgst": false, 00:41:05.909 "ddgst": false 00:41:05.909 }, 00:41:05.909 "method": "bdev_nvme_attach_controller" 00:41:05.909 }' 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:05.909 08:14:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.169 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:06.169 fio-3.35 
00:41:06.169 Starting 1 thread 00:41:18.373 00:41:18.373 filename0: (groupid=0, jobs=1): err= 0: pid=957089: Mon Nov 18 08:15:09 2024 00:41:18.373 read: IOPS=331, BW=1326KiB/s (1358kB/s)(13.0MiB/10029msec) 00:41:18.373 slat (nsec): min=3781, max=42193, avg=9417.65, stdev=2605.66 00:41:18.373 clat (usec): min=487, max=48280, avg=12038.40, stdev=18303.03 00:41:18.373 lat (usec): min=495, max=48292, avg=12047.82, stdev=18302.99 00:41:18.373 clat percentiles (usec): 00:41:18.373 | 1.00th=[ 523], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 578], 00:41:18.373 | 30.00th=[ 594], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 676], 00:41:18.373 | 70.00th=[ 750], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:18.373 | 99.00th=[42206], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:41:18.373 | 99.99th=[48497] 00:41:18.373 bw ( KiB/s): min= 544, max= 5120, per=100.00%, avg=1328.00, stdev=1029.59, samples=20 00:41:18.374 iops : min= 136, max= 1280, avg=332.00, stdev=257.40, samples=20 00:41:18.374 lat (usec) : 500=0.09%, 750=69.83%, 1000=2.05% 00:41:18.374 lat (msec) : 50=28.04% 00:41:18.374 cpu : usr=90.94%, sys=8.76%, ctx=16, majf=0, minf=220 00:41:18.374 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:18.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:18.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:18.374 issued rwts: total=3324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:18.374 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:18.374 00:41:18.374 Run status group 0 (all jobs): 00:41:18.374 READ: bw=1326KiB/s (1358kB/s), 1326KiB/s-1326KiB/s (1358kB/s-1358kB/s), io=13.0MiB (13.6MB), run=10029-10029msec 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # 
for sub in "$@" 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.374 00:41:18.374 real 0m11.011s 00:41:18.374 user 0m10.151s 00:41:18.374 sys 0m1.137s 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:18.374 ************************************ 00:41:18.374 END TEST fio_dif_1_default 00:41:18.374 ************************************ 00:41:18.374 08:15:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:18.374 08:15:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.374 08:15:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.374 08:15:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:18.374 ************************************ 00:41:18.374 START TEST fio_dif_1_multi_subsystems 00:41:18.374 ************************************ 00:41:18.374 08:15:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:18.374 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.375 bdev_null0 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.375 08:15:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.375 [2024-11-18 08:15:09.986393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.375 bdev_null1 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.375 08:15:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:18.375 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:41:18.376 { 00:41:18.376 "params": { 00:41:18.376 "name": "Nvme$subsystem", 00:41:18.376 "trtype": "$TEST_TRANSPORT", 00:41:18.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:18.376 "adrfam": "ipv4", 00:41:18.376 "trsvcid": "$NVMF_PORT", 00:41:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:18.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:18.376 "hdgst": ${hdgst:-false}, 00:41:18.376 "ddgst": ${ddgst:-false} 00:41:18.376 }, 00:41:18.376 "method": "bdev_nvme_attach_controller" 00:41:18.376 } 00:41:18.376 EOF 00:41:18.376 )") 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:18.376 
08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:18.376 { 00:41:18.376 "params": { 00:41:18.376 "name": "Nvme$subsystem", 00:41:18.376 "trtype": "$TEST_TRANSPORT", 00:41:18.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:18.376 "adrfam": "ipv4", 00:41:18.376 "trsvcid": "$NVMF_PORT", 00:41:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:18.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:18.376 "hdgst": ${hdgst:-false}, 00:41:18.376 "ddgst": ${ddgst:-false} 00:41:18.376 }, 00:41:18.376 "method": "bdev_nvme_attach_controller" 00:41:18.376 } 00:41:18.376 EOF 00:41:18.376 )") 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:18.376 "params": { 00:41:18.376 "name": "Nvme0", 00:41:18.376 "trtype": "tcp", 00:41:18.376 "traddr": "10.0.0.2", 00:41:18.376 "adrfam": "ipv4", 00:41:18.376 "trsvcid": "4420", 00:41:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:18.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:18.376 "hdgst": false, 00:41:18.376 "ddgst": false 00:41:18.376 }, 00:41:18.376 "method": "bdev_nvme_attach_controller" 00:41:18.376 },{ 00:41:18.376 "params": { 00:41:18.376 "name": "Nvme1", 00:41:18.376 "trtype": "tcp", 00:41:18.376 "traddr": "10.0.0.2", 00:41:18.376 "adrfam": "ipv4", 00:41:18.376 "trsvcid": "4420", 00:41:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:18.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:18.376 "hdgst": false, 00:41:18.376 "ddgst": false 00:41:18.376 }, 00:41:18.376 "method": "bdev_nvme_attach_controller" 00:41:18.376 }' 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:18.376 08:15:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:18.376 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:18.376 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:18.376 fio-3.35 00:41:18.376 Starting 2 threads 00:41:28.384 00:41:28.384 filename0: (groupid=0, jobs=1): err= 0: pid=959106: Mon Nov 18 08:15:21 2024 00:41:28.384 read: IOPS=200, BW=804KiB/s (823kB/s)(8064KiB/10032msec) 00:41:28.384 slat (nsec): min=6976, max=27079, avg=9583.90, stdev=2327.92 00:41:28.384 clat (usec): min=520, max=42406, avg=19873.36, stdev=20354.20 00:41:28.384 lat (usec): min=528, max=42418, avg=19882.94, stdev=20354.06 00:41:28.384 clat percentiles (usec): 00:41:28.384 | 1.00th=[ 553], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 594], 00:41:28.384 | 30.00th=[ 619], 40.00th=[ 644], 50.00th=[ 685], 60.00th=[41157], 00:41:28.384 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:28.384 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:28.384 | 99.99th=[42206] 00:41:28.384 bw ( KiB/s): min= 736, max= 896, per=30.68%, avg=804.80, stdev=51.15, samples=20 00:41:28.384 iops : min= 184, max= 224, avg=201.20, stdev=12.79, samples=20 00:41:28.384 lat (usec) : 750=50.00%, 1000=2.78% 00:41:28.384 lat (msec) : 50=47.22% 00:41:28.384 cpu : usr=94.73%, sys=4.96%, ctx=12, majf=0, minf=88 00:41:28.384 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:28.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:28.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.384 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.384 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:28.384 filename1: (groupid=0, jobs=1): err= 0: pid=959107: Mon Nov 18 08:15:21 2024 00:41:28.384 read: IOPS=454, BW=1817KiB/s (1861kB/s)(17.8MiB/10028msec) 00:41:28.384 slat (usec): min=6, max=176, avg= 9.79, stdev= 3.73 00:41:28.384 clat (usec): min=513, max=42632, avg=8773.03, stdev=16290.85 00:41:28.384 lat (usec): min=521, max=42645, avg=8782.82, stdev=16290.77 00:41:28.384 clat percentiles (usec): 00:41:28.384 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:41:28.384 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 611], 00:41:28.384 | 70.00th=[ 627], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:41:28.384 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:41:28.384 | 99.99th=[42730] 00:41:28.384 bw ( KiB/s): min= 1472, max= 2112, per=69.45%, avg=1820.80, stdev=205.01, samples=20 00:41:28.384 iops : min= 368, max= 528, avg=455.20, stdev=51.25, samples=20 00:41:28.384 lat (usec) : 750=78.62%, 1000=1.14% 00:41:28.384 lat (msec) : 2=0.04%, 4=0.09%, 50=20.11% 00:41:28.384 cpu : usr=94.80%, sys=4.53%, ctx=102, majf=0, minf=236 00:41:28.384 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:28.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.384 issued rwts: total=4556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.384 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:28.384 00:41:28.384 Run status group 0 (all jobs): 00:41:28.384 READ: bw=2620KiB/s (2683kB/s), 804KiB/s-1817KiB/s (823kB/s-1861kB/s), io=25.7MiB (26.9MB), run=10028-10032msec 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.643 00:41:28.643 real 0m11.602s 00:41:28.643 user 0m20.674s 00:41:28.643 sys 0m1.303s 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.643 ************************************ 00:41:28.643 END TEST fio_dif_1_multi_subsystems 00:41:28.643 ************************************ 00:41:28.643 08:15:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:28.643 08:15:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:28.643 08:15:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:28.643 ************************************ 00:41:28.643 START TEST fio_dif_rand_params 00:41:28.643 ************************************ 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.643 bdev_null0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.643 [2024-11-18 08:15:21.645982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:28.643 08:15:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:28.644 { 00:41:28.644 "params": { 00:41:28.644 "name": "Nvme$subsystem", 00:41:28.644 "trtype": "$TEST_TRANSPORT", 00:41:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.644 "adrfam": "ipv4", 00:41:28.644 "trsvcid": "$NVMF_PORT", 00:41:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.644 "hdgst": ${hdgst:-false}, 00:41:28.644 "ddgst": ${ddgst:-false} 00:41:28.644 }, 00:41:28.644 "method": "bdev_nvme_attach_controller" 00:41:28.644 } 00:41:28.644 EOF 00:41:28.644 )") 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:28.644 "params": { 00:41:28.644 "name": "Nvme0", 00:41:28.644 "trtype": "tcp", 00:41:28.644 "traddr": "10.0.0.2", 00:41:28.644 "adrfam": "ipv4", 00:41:28.644 "trsvcid": "4420", 00:41:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:28.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:28.644 "hdgst": false, 00:41:28.644 "ddgst": false 00:41:28.644 }, 00:41:28.644 "method": "bdev_nvme_attach_controller" 00:41:28.644 }' 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:28.644 08:15:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.904 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:28.904 ... 00:41:28.904 fio-3.35 00:41:28.904 Starting 3 threads 00:41:35.461 00:41:35.461 filename0: (groupid=0, jobs=1): err= 0: pid=960499: Mon Nov 18 08:15:27 2024 00:41:35.461 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(129MiB/5023msec) 00:41:35.461 slat (usec): min=4, max=171, avg=16.36, stdev= 7.14 00:41:35.461 clat (usec): min=4643, max=58279, avg=14551.18, stdev=11056.47 00:41:35.461 lat (usec): min=4657, max=58306, avg=14567.54, stdev=11056.16 00:41:35.461 clat percentiles (usec): 00:41:35.461 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 8356], 20.00th=[ 9110], 00:41:35.461 | 30.00th=[ 9765], 40.00th=[11207], 50.00th=[12256], 60.00th=[12780], 00:41:35.461 | 70.00th=[13435], 80.00th=[14222], 90.00th=[16581], 95.00th=[49546], 00:41:35.461 | 99.00th=[54264], 99.50th=[55313], 99.90th=[58459], 99.95th=[58459], 00:41:35.461 | 99.99th=[58459] 00:41:35.461 bw ( KiB/s): min=21248, max=30976, per=32.73%, avg=26168.89, stdev=3785.81, samples=9 00:41:35.461 iops : min= 166, max= 242, avg=204.44, stdev=29.58, samples=9 00:41:35.461 lat (msec) : 10=32.30%, 20=59.57%, 50=4.16%, 100=3.97% 00:41:35.461 cpu : usr=90.56%, sys=6.85%, ctx=279, majf=0, minf=110 00:41:35.461 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.461 issued rwts: total=1034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.461 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.461 filename0: (groupid=0, jobs=1): err= 0: pid=960500: Mon Nov 18 08:15:27 2024 00:41:35.461 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(140MiB/5002msec) 00:41:35.461 slat 
(nsec): min=4417, max=54420, avg=14710.44, stdev=4101.60 00:41:35.461 clat (usec): min=4226, max=93396, avg=13401.35, stdev=9821.60 00:41:35.461 lat (usec): min=4238, max=93406, avg=13416.06, stdev=9821.66 00:41:35.461 clat percentiles (usec): 00:41:35.461 | 1.00th=[ 4948], 5.00th=[ 5538], 10.00th=[ 7570], 20.00th=[ 8717], 00:41:35.461 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[11863], 60.00th=[12911], 00:41:35.461 | 70.00th=[13698], 80.00th=[14746], 90.00th=[16319], 95.00th=[44827], 00:41:35.461 | 99.00th=[53216], 99.50th=[55837], 99.90th=[93848], 99.95th=[93848], 00:41:35.461 | 99.99th=[93848] 00:41:35.461 bw ( KiB/s): min=17442, max=34304, per=36.16%, avg=28903.33, stdev=5752.16, samples=9 00:41:35.461 iops : min= 136, max= 268, avg=225.78, stdev=45.00, samples=9 00:41:35.461 lat (msec) : 10=38.82%, 20=56.17%, 50=3.13%, 100=1.88% 00:41:35.461 cpu : usr=88.96%, sys=7.80%, ctx=235, majf=0, minf=112 00:41:35.461 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.461 issued rwts: total=1118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.461 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.461 filename0: (groupid=0, jobs=1): err= 0: pid=960501: Mon Nov 18 08:15:27 2024 00:41:35.461 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(123MiB/5012msec) 00:41:35.461 slat (nsec): min=4240, max=41718, avg=15114.78, stdev=5246.99 00:41:35.461 clat (usec): min=4724, max=55042, avg=15244.09, stdev=12781.60 00:41:35.461 lat (usec): min=4737, max=55056, avg=15259.20, stdev=12781.33 00:41:35.461 clat percentiles (usec): 00:41:35.461 | 1.00th=[ 4948], 5.00th=[ 6587], 10.00th=[ 8029], 20.00th=[ 8848], 00:41:35.461 | 30.00th=[10159], 40.00th=[11076], 50.00th=[11731], 60.00th=[12125], 00:41:35.461 | 70.00th=[12649], 80.00th=[13173], 90.00th=[47973], 95.00th=[51643], 
00:41:35.461 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:41:35.462 | 99.99th=[54789] 00:41:35.462 bw ( KiB/s): min=17152, max=32768, per=31.45%, avg=25139.20, stdev=5277.99, samples=10 00:41:35.462 iops : min= 134, max= 256, avg=196.40, stdev=41.23, samples=10 00:41:35.462 lat (msec) : 10=29.04%, 20=60.00%, 50=3.76%, 100=7.21% 00:41:35.462 cpu : usr=91.82%, sys=6.51%, ctx=247, majf=0, minf=92 00:41:35.462 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.462 issued rwts: total=985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.462 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.462 00:41:35.462 Run status group 0 (all jobs): 00:41:35.462 READ: bw=78.1MiB/s (81.9MB/s), 24.6MiB/s-27.9MiB/s (25.8MB/s-29.3MB/s), io=392MiB (411MB), run=5002-5023msec 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 bdev_null0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:35.462 08:15:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 [2024-11-18 08:15:27.723693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 bdev_null1 00:41:35.462 08:15:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 bdev_null2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:35.462 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:35.462 { 00:41:35.462 "params": { 00:41:35.462 "name": "Nvme$subsystem", 00:41:35.462 "trtype": "$TEST_TRANSPORT", 00:41:35.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:35.462 "adrfam": "ipv4", 00:41:35.462 "trsvcid": "$NVMF_PORT", 00:41:35.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:35.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:35.462 "hdgst": ${hdgst:-false}, 00:41:35.462 "ddgst": ${ddgst:-false} 00:41:35.462 }, 00:41:35.462 "method": "bdev_nvme_attach_controller" 00:41:35.462 } 00:41:35.462 EOF 00:41:35.462 )") 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@56 -- # cat 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:35.463 { 00:41:35.463 "params": { 00:41:35.463 "name": "Nvme$subsystem", 00:41:35.463 "trtype": "$TEST_TRANSPORT", 00:41:35.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:35.463 "adrfam": "ipv4", 00:41:35.463 "trsvcid": "$NVMF_PORT", 00:41:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:35.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:35.463 "hdgst": ${hdgst:-false}, 00:41:35.463 "ddgst": ${ddgst:-false} 00:41:35.463 }, 00:41:35.463 "method": "bdev_nvme_attach_controller" 00:41:35.463 } 00:41:35.463 EOF 00:41:35.463 )") 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:35.463 { 00:41:35.463 "params": { 00:41:35.463 "name": "Nvme$subsystem", 00:41:35.463 "trtype": "$TEST_TRANSPORT", 00:41:35.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:35.463 "adrfam": "ipv4", 00:41:35.463 "trsvcid": "$NVMF_PORT", 00:41:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:35.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:35.463 "hdgst": ${hdgst:-false}, 00:41:35.463 "ddgst": ${ddgst:-false} 00:41:35.463 }, 00:41:35.463 "method": "bdev_nvme_attach_controller" 00:41:35.463 } 00:41:35.463 EOF 00:41:35.463 )") 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:35.463 "params": { 00:41:35.463 "name": "Nvme0", 00:41:35.463 "trtype": "tcp", 00:41:35.463 "traddr": "10.0.0.2", 00:41:35.463 "adrfam": "ipv4", 00:41:35.463 "trsvcid": "4420", 00:41:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:35.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:35.463 "hdgst": false, 00:41:35.463 "ddgst": false 00:41:35.463 }, 00:41:35.463 "method": "bdev_nvme_attach_controller" 00:41:35.463 },{ 00:41:35.463 "params": { 00:41:35.463 "name": "Nvme1", 00:41:35.463 "trtype": "tcp", 00:41:35.463 "traddr": "10.0.0.2", 00:41:35.463 "adrfam": "ipv4", 00:41:35.463 "trsvcid": "4420", 00:41:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:35.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:35.463 "hdgst": false, 00:41:35.463 "ddgst": false 00:41:35.463 }, 00:41:35.463 "method": "bdev_nvme_attach_controller" 00:41:35.463 },{ 00:41:35.463 "params": { 00:41:35.463 "name": "Nvme2", 00:41:35.463 "trtype": "tcp", 00:41:35.463 "traddr": "10.0.0.2", 00:41:35.463 "adrfam": "ipv4", 00:41:35.463 "trsvcid": "4420", 00:41:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:35.463 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:35.463 "hdgst": false, 00:41:35.463 "ddgst": false 00:41:35.463 }, 00:41:35.463 "method": "bdev_nvme_attach_controller" 00:41:35.463 }' 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:35.463 08:15:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:35.463 08:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:35.463 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:35.463 ... 00:41:35.463 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:35.463 ... 00:41:35.463 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:35.463 ... 
00:41:35.463 fio-3.35 00:41:35.463 Starting 24 threads 00:41:47.664 00:41:47.664 filename0: (groupid=0, jobs=1): err= 0: pid=961364: Mon Nov 18 08:15:39 2024 00:41:47.664 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10010msec) 00:41:47.664 slat (usec): min=8, max=270, avg=19.92, stdev=17.65 00:41:47.664 clat (usec): min=21694, max=55434, avg=34702.93, stdev=3836.67 00:41:47.664 lat (usec): min=21709, max=55447, avg=34722.85, stdev=3834.29 00:41:47.665 clat percentiles (usec): 00:41:47.665 | 1.00th=[24511], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:47.665 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:47.665 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43779], 00:41:47.665 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:41:47.665 | 99.99th=[55313] 00:41:47.665 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1830.40, stdev=186.19, samples=20 00:41:47.665 iops : min= 352, max= 512, avg=457.60, stdev=46.55, samples=20 00:41:47.665 lat (msec) : 50=99.96%, 100=0.04% 00:41:47.665 cpu : usr=96.94%, sys=1.98%, ctx=152, majf=0, minf=9 00:41:47.665 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:47.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.665 filename0: (groupid=0, jobs=1): err= 0: pid=961365: Mon Nov 18 08:15:39 2024 00:41:47.665 read: IOPS=456, BW=1825KiB/s (1869kB/s)(17.9MiB/10030msec) 00:41:47.665 slat (usec): min=9, max=133, avg=49.26, stdev=17.94 00:41:47.665 clat (usec): min=23374, max=63306, avg=34595.06, stdev=4020.19 00:41:47.665 lat (usec): min=23423, max=63351, avg=34644.32, stdev=4020.13 00:41:47.665 clat percentiles (usec): 00:41:47.665 | 1.00th=[32113], 5.00th=[32375], 
10.00th=[32375], 20.00th=[32637], 00:41:47.665 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:41:47.665 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.665 | 99.00th=[43779], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:41:47.665 | 99.99th=[63177] 00:41:47.665 bw ( KiB/s): min= 1408, max= 1923, per=4.17%, avg=1824.15, stdev=159.67, samples=20 00:41:47.665 iops : min= 352, max= 480, avg=456.00, stdev=39.89, samples=20 00:41:47.665 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.665 cpu : usr=98.30%, sys=1.30%, ctx=13, majf=0, minf=9 00:41:47.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.665 filename0: (groupid=0, jobs=1): err= 0: pid=961366: Mon Nov 18 08:15:39 2024 00:41:47.665 read: IOPS=457, BW=1828KiB/s (1872kB/s)(17.9MiB/10046msec) 00:41:47.665 slat (usec): min=14, max=172, avg=62.72, stdev=26.12 00:41:47.665 clat (usec): min=22274, max=63021, avg=34456.94, stdev=4170.88 00:41:47.665 lat (usec): min=22302, max=63132, avg=34519.66, stdev=4164.12 00:41:47.665 clat percentiles (usec): 00:41:47.665 | 1.00th=[25297], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:41:47.665 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:41:47.665 | 70.00th=[33424], 80.00th=[33817], 90.00th=[42730], 95.00th=[43254], 00:41:47.665 | 99.00th=[43779], 99.50th=[44303], 99.90th=[62653], 99.95th=[62653], 00:41:47.665 | 99.99th=[63177] 00:41:47.665 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1830.40, stdev=176.68, samples=20 00:41:47.665 iops : min= 352, max= 480, avg=457.60, stdev=44.17, samples=20 00:41:47.665 lat (msec) : 50=99.65%, 
100=0.35% 00:41:47.665 cpu : usr=96.81%, sys=1.97%, ctx=126, majf=0, minf=9 00:41:47.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.665 filename0: (groupid=0, jobs=1): err= 0: pid=961367: Mon Nov 18 08:15:39 2024 00:41:47.665 read: IOPS=457, BW=1830KiB/s (1874kB/s)(18.0MiB/10072msec) 00:41:47.665 slat (nsec): min=7997, max=94696, avg=20114.34, stdev=12973.11 00:41:47.665 clat (usec): min=15045, max=80876, avg=34799.39, stdev=4803.62 00:41:47.665 lat (usec): min=15054, max=80903, avg=34819.50, stdev=4800.31 00:41:47.665 clat percentiles (usec): 00:41:47.665 | 1.00th=[23200], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:47.665 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:47.665 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43779], 00:41:47.665 | 99.00th=[43779], 99.50th=[44303], 99.90th=[81265], 99.95th=[81265], 00:41:47.665 | 99.99th=[81265] 00:41:47.665 bw ( KiB/s): min= 1408, max= 1920, per=4.20%, avg=1836.80, stdev=162.31, samples=20 00:41:47.665 iops : min= 352, max= 480, avg=459.20, stdev=40.58, samples=20 00:41:47.665 lat (msec) : 20=0.39%, 50=99.22%, 100=0.39% 00:41:47.665 cpu : usr=98.26%, sys=1.33%, ctx=26, majf=0, minf=9 00:41:47.665 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.665 filename0: 
(groupid=0, jobs=1): err= 0: pid=961368: Mon Nov 18 08:15:39 2024 00:41:47.665 read: IOPS=454, BW=1817KiB/s (1860kB/s)(17.8MiB/10040msec) 00:41:47.665 slat (usec): min=9, max=107, avg=43.25, stdev=19.77 00:41:47.665 clat (usec): min=31334, max=70258, avg=34852.83, stdev=4527.48 00:41:47.665 lat (usec): min=31396, max=70294, avg=34896.07, stdev=4522.90 00:41:47.665 clat percentiles (usec): 00:41:47.665 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:47.665 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.665 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.665 | 99.00th=[43779], 99.50th=[66323], 99.90th=[67634], 99.95th=[67634], 00:41:47.665 | 99.99th=[70779] 00:41:47.665 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1817.60, stdev=179.10, samples=20 00:41:47.665 iops : min= 352, max= 480, avg=454.40, stdev=44.78, samples=20 00:41:47.665 lat (msec) : 50=99.30%, 100=0.70% 00:41:47.665 cpu : usr=97.58%, sys=1.68%, ctx=115, majf=0, minf=9 00:41:47.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.665 filename0: (groupid=0, jobs=1): err= 0: pid=961369: Mon Nov 18 08:15:39 2024 00:41:47.665 read: IOPS=454, BW=1816KiB/s (1860kB/s)(17.8MiB/10043msec) 00:41:47.665 slat (usec): min=8, max=109, avg=40.73, stdev=18.97 00:41:47.665 clat (usec): min=31310, max=69189, avg=34843.96, stdev=4516.91 00:41:47.665 lat (usec): min=31380, max=69211, avg=34884.69, stdev=4513.96 00:41:47.665 clat percentiles (usec): 00:41:47.665 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:47.665 | 30.00th=[32900], 40.00th=[32900], 
50.00th=[33162], 60.00th=[33162], 00:41:47.665 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.665 | 99.00th=[43779], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:41:47.665 | 99.99th=[68682] 00:41:47.665 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1817.75, stdev=178.97, samples=20 00:41:47.665 iops : min= 352, max= 480, avg=454.40, stdev=44.78, samples=20 00:41:47.665 lat (msec) : 50=99.30%, 100=0.70% 00:41:47.665 cpu : usr=98.32%, sys=1.26%, ctx=11, majf=0, minf=9 00:41:47.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:47.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.665 filename0: (groupid=0, jobs=1): err= 0: pid=961370: Mon Nov 18 08:15:39 2024 00:41:47.665 read: IOPS=454, BW=1816KiB/s (1860kB/s)(17.8MiB/10043msec) 00:41:47.665 slat (usec): min=9, max=150, avg=58.41, stdev=22.80 00:41:47.665 clat (usec): min=31255, max=69272, avg=34713.97, stdev=4433.94 00:41:47.665 lat (usec): min=31325, max=69300, avg=34772.37, stdev=4437.80 00:41:47.665 clat percentiles (usec): 00:41:47.665 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:41:47.665 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.665 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42206], 95.00th=[43254], 00:41:47.665 | 99.00th=[43779], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:41:47.665 | 99.99th=[69731] 00:41:47.665 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1817.60, stdev=179.10, samples=20 00:41:47.665 iops : min= 352, max= 480, avg=454.40, stdev=44.78, samples=20 00:41:47.665 lat (msec) : 50=99.30%, 100=0.70% 00:41:47.665 cpu : usr=98.45%, sys=1.12%, ctx=16, majf=0, minf=9 00:41:47.665 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:47.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.665 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.665 filename0: (groupid=0, jobs=1): err= 0: pid=961371: Mon Nov 18 08:15:39 2024 00:41:47.665 read: IOPS=456, BW=1825KiB/s (1869kB/s)(17.9MiB/10029msec) 00:41:47.665 slat (usec): min=8, max=101, avg=47.93, stdev=15.62 00:41:47.665 clat (usec): min=22879, max=63216, avg=34643.00, stdev=4002.95 00:41:47.665 lat (usec): min=22930, max=63239, avg=34690.93, stdev=4003.02 00:41:47.665 clat percentiles (usec): 00:41:47.665 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:47.665 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.665 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.665 | 99.00th=[43779], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:41:47.665 | 99.99th=[63177] 00:41:47.665 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1823.80, stdev=160.04, samples=20 00:41:47.665 iops : min= 352, max= 480, avg=455.95, stdev=40.01, samples=20 00:41:47.665 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.666 cpu : usr=98.40%, sys=1.18%, ctx=13, majf=0, minf=9 00:41:47.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.666 filename1: (groupid=0, jobs=1): err= 0: pid=961372: Mon Nov 18 08:15:39 2024 00:41:47.666 read: IOPS=456, BW=1825KiB/s 
(1869kB/s)(17.9MiB/10029msec) 00:41:47.666 slat (usec): min=13, max=129, avg=48.39, stdev=17.87 00:41:47.666 clat (usec): min=23373, max=63193, avg=34596.32, stdev=4015.74 00:41:47.666 lat (usec): min=23406, max=63240, avg=34644.71, stdev=4015.69 00:41:47.666 clat percentiles (usec): 00:41:47.666 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:47.666 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:41:47.666 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.666 | 99.00th=[43779], 99.50th=[44303], 99.90th=[62653], 99.95th=[63177], 00:41:47.666 | 99.99th=[63177] 00:41:47.666 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1824.00, stdev=160.17, samples=20 00:41:47.666 iops : min= 352, max= 480, avg=456.00, stdev=40.04, samples=20 00:41:47.666 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.666 cpu : usr=98.09%, sys=1.44%, ctx=17, majf=0, minf=9 00:41:47.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.666 filename1: (groupid=0, jobs=1): err= 0: pid=961373: Mon Nov 18 08:15:39 2024 00:41:47.666 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10044msec) 00:41:47.666 slat (usec): min=8, max=118, avg=30.20, stdev=19.83 00:41:47.666 clat (usec): min=22504, max=62415, avg=34760.56, stdev=4086.74 00:41:47.666 lat (usec): min=22518, max=62437, avg=34790.77, stdev=4085.43 00:41:47.666 clat percentiles (usec): 00:41:47.666 | 1.00th=[25297], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:47.666 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:47.666 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 
00:41:47.666 | 99.00th=[44303], 99.50th=[44303], 99.90th=[62129], 99.95th=[62653], 00:41:47.666 | 99.99th=[62653] 00:41:47.666 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1830.40, stdev=176.68, samples=20 00:41:47.666 iops : min= 352, max= 480, avg=457.60, stdev=44.17, samples=20 00:41:47.666 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.666 cpu : usr=98.34%, sys=1.25%, ctx=15, majf=0, minf=9 00:41:47.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.666 filename1: (groupid=0, jobs=1): err= 0: pid=961374: Mon Nov 18 08:15:39 2024 00:41:47.666 read: IOPS=453, BW=1816KiB/s (1859kB/s)(17.8MiB/10045msec) 00:41:47.666 slat (nsec): min=8700, max=66017, avg=31893.32, stdev=9591.46 00:41:47.666 clat (usec): min=28417, max=72243, avg=34951.37, stdev=4550.83 00:41:47.666 lat (usec): min=28428, max=72263, avg=34983.26, stdev=4550.91 00:41:47.666 clat percentiles (usec): 00:41:47.666 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:47.666 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:47.666 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.666 | 99.00th=[43779], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:41:47.666 | 99.99th=[71828] 00:41:47.666 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1817.60, stdev=183.93, samples=20 00:41:47.666 iops : min= 352, max= 512, avg=454.40, stdev=45.98, samples=20 00:41:47.666 lat (msec) : 50=99.30%, 100=0.70% 00:41:47.666 cpu : usr=97.91%, sys=1.34%, ctx=94, majf=0, minf=9 00:41:47.666 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:47.666 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.666 filename1: (groupid=0, jobs=1): err= 0: pid=961375: Mon Nov 18 08:15:39 2024 00:41:47.666 read: IOPS=456, BW=1826KiB/s (1869kB/s)(17.9MiB/10026msec) 00:41:47.666 slat (usec): min=8, max=121, avg=47.07, stdev=16.20 00:41:47.666 clat (usec): min=22961, max=63341, avg=34614.46, stdev=4022.24 00:41:47.666 lat (usec): min=23020, max=63377, avg=34661.54, stdev=4022.17 00:41:47.666 clat percentiles (usec): 00:41:47.666 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:47.666 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.666 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.666 | 99.00th=[43779], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:41:47.666 | 99.99th=[63177] 00:41:47.666 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1824.00, stdev=160.17, samples=20 00:41:47.666 iops : min= 352, max= 480, avg=456.00, stdev=40.04, samples=20 00:41:47.666 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.666 cpu : usr=98.31%, sys=1.28%, ctx=13, majf=0, minf=9 00:41:47.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.666 filename1: (groupid=0, jobs=1): err= 0: pid=961376: Mon Nov 18 08:15:39 2024 00:41:47.666 read: IOPS=456, BW=1825KiB/s (1869kB/s)(17.9MiB/10030msec) 00:41:47.666 slat (usec): min=13, max=129, avg=49.09, stdev=16.48 00:41:47.666 clat 
(usec): min=23390, max=63202, avg=34623.05, stdev=4008.98 00:41:47.666 lat (usec): min=23427, max=63226, avg=34672.13, stdev=4008.77 00:41:47.666 clat percentiles (usec): 00:41:47.666 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:47.666 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.666 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.666 | 99.00th=[43779], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:41:47.666 | 99.99th=[63177] 00:41:47.666 bw ( KiB/s): min= 1408, max= 1923, per=4.17%, avg=1824.15, stdev=160.26, samples=20 00:41:47.666 iops : min= 352, max= 480, avg=456.00, stdev=40.04, samples=20 00:41:47.666 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.666 cpu : usr=97.71%, sys=1.58%, ctx=129, majf=0, minf=9 00:41:47.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.666 filename1: (groupid=0, jobs=1): err= 0: pid=961377: Mon Nov 18 08:15:39 2024 00:41:47.666 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10045msec) 00:41:47.666 slat (usec): min=14, max=137, avg=49.47, stdev=17.72 00:41:47.666 clat (usec): min=22339, max=63023, avg=34581.59, stdev=4102.90 00:41:47.666 lat (usec): min=22366, max=63072, avg=34631.06, stdev=4102.11 00:41:47.666 clat percentiles (usec): 00:41:47.666 | 1.00th=[25035], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:47.666 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.666 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.666 | 99.00th=[43779], 99.50th=[44303], 99.90th=[62653], 99.95th=[63177], 00:41:47.666 | 99.99th=[63177] 
00:41:47.666 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1830.40, stdev=176.68, samples=20 00:41:47.666 iops : min= 352, max= 480, avg=457.60, stdev=44.17, samples=20 00:41:47.666 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.666 cpu : usr=98.27%, sys=1.31%, ctx=12, majf=0, minf=9 00:41:47.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.666 filename1: (groupid=0, jobs=1): err= 0: pid=961378: Mon Nov 18 08:15:39 2024 00:41:47.666 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10045msec) 00:41:47.666 slat (usec): min=10, max=100, avg=45.60, stdev=16.95 00:41:47.666 clat (usec): min=22235, max=62731, avg=34633.56, stdev=4095.29 00:41:47.666 lat (usec): min=22269, max=62758, avg=34679.16, stdev=4095.03 00:41:47.666 clat percentiles (usec): 00:41:47.666 | 1.00th=[25297], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:47.666 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.666 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.666 | 99.00th=[43779], 99.50th=[44303], 99.90th=[62653], 99.95th=[62653], 00:41:47.666 | 99.99th=[62653] 00:41:47.666 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1830.40, stdev=176.68, samples=20 00:41:47.666 iops : min= 352, max= 480, avg=457.60, stdev=44.17, samples=20 00:41:47.666 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.666 cpu : usr=97.17%, sys=1.66%, ctx=148, majf=0, minf=9 00:41:47.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.666 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:41:47.666 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.666 filename1: (groupid=0, jobs=1): err= 0: pid=961379: Mon Nov 18 08:15:39 2024 00:41:47.666 read: IOPS=456, BW=1826KiB/s (1870kB/s)(17.9MiB/10024msec) 00:41:47.666 slat (usec): min=12, max=128, avg=45.03, stdev=14.76 00:41:47.666 clat (usec): min=23347, max=63317, avg=34617.18, stdev=4040.12 00:41:47.667 lat (usec): min=23387, max=63371, avg=34662.21, stdev=4039.90 00:41:47.667 clat percentiles (usec): 00:41:47.667 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:47.667 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.667 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.667 | 99.00th=[43779], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:41:47.667 | 99.99th=[63177] 00:41:47.667 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1824.00, stdev=160.17, samples=20 00:41:47.667 iops : min= 352, max= 480, avg=456.00, stdev=40.04, samples=20 00:41:47.667 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.667 cpu : usr=97.46%, sys=1.61%, ctx=144, majf=0, minf=9 00:41:47.667 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:47.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.667 filename2: (groupid=0, jobs=1): err= 0: pid=961380: Mon Nov 18 08:15:39 2024 00:41:47.667 read: IOPS=455, BW=1820KiB/s (1864kB/s)(17.9MiB/10057msec) 00:41:47.667 slat (usec): min=8, max=121, avg=34.26, stdev=23.03 00:41:47.667 clat (usec): min=13445, max=80956, avg=34852.37, stdev=4676.61 00:41:47.667 lat (usec): min=13493, max=80980, avg=34886.63, 
stdev=4668.22 00:41:47.667 clat percentiles (usec): 00:41:47.667 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:41:47.667 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.667 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43779], 00:41:47.667 | 99.00th=[44303], 99.50th=[44303], 99.90th=[81265], 99.95th=[81265], 00:41:47.667 | 99.99th=[81265] 00:41:47.667 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1824.15, stdev=165.44, samples=20 00:41:47.667 iops : min= 352, max= 512, avg=456.00, stdev=41.37, samples=20 00:41:47.667 lat (msec) : 20=0.13%, 50=99.39%, 100=0.48% 00:41:47.667 cpu : usr=97.39%, sys=1.74%, ctx=138, majf=0, minf=9 00:41:47.667 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:47.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.667 filename2: (groupid=0, jobs=1): err= 0: pid=961381: Mon Nov 18 08:15:39 2024 00:41:47.667 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10045msec) 00:41:47.667 slat (usec): min=11, max=148, avg=66.62, stdev=21.18 00:41:47.667 clat (usec): min=23322, max=63124, avg=34409.98, stdev=4050.60 00:41:47.667 lat (usec): min=23370, max=63169, avg=34476.61, stdev=4055.72 00:41:47.667 clat percentiles (usec): 00:41:47.667 | 1.00th=[24511], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:41:47.667 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:41:47.667 | 70.00th=[33424], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:41:47.667 | 99.00th=[43779], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:41:47.667 | 99.99th=[63177] 00:41:47.667 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1830.40, stdev=176.68, samples=20 
00:41:47.667 iops : min= 352, max= 480, avg=457.60, stdev=44.17, samples=20 00:41:47.667 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.667 cpu : usr=98.19%, sys=1.36%, ctx=12, majf=0, minf=9 00:41:47.667 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:47.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.667 filename2: (groupid=0, jobs=1): err= 0: pid=961382: Mon Nov 18 08:15:39 2024 00:41:47.667 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10044msec) 00:41:47.667 slat (usec): min=8, max=192, avg=34.72, stdev=16.14 00:41:47.667 clat (usec): min=22349, max=62560, avg=34727.72, stdev=4091.23 00:41:47.667 lat (usec): min=22383, max=62581, avg=34762.44, stdev=4089.82 00:41:47.667 clat percentiles (usec): 00:41:47.667 | 1.00th=[25297], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:47.667 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:47.667 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.667 | 99.00th=[43779], 99.50th=[44303], 99.90th=[62653], 99.95th=[62653], 00:41:47.667 | 99.99th=[62653] 00:41:47.667 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1830.40, stdev=176.68, samples=20 00:41:47.667 iops : min= 352, max= 480, avg=457.60, stdev=44.17, samples=20 00:41:47.667 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.667 cpu : usr=97.19%, sys=1.80%, ctx=274, majf=0, minf=9 00:41:47.667 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:47.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.667 
latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.667 filename2: (groupid=0, jobs=1): err= 0: pid=961383: Mon Nov 18 08:15:39 2024 00:41:47.667 read: IOPS=454, BW=1817KiB/s (1860kB/s)(17.8MiB/10040msec) 00:41:47.667 slat (nsec): min=8128, max=89511, avg=30561.28, stdev=15107.14 00:41:47.667 clat (usec): min=28314, max=67375, avg=34975.90, stdev=4501.27 00:41:47.667 lat (usec): min=28326, max=67397, avg=35006.46, stdev=4498.44 00:41:47.667 clat percentiles (usec): 00:41:47.667 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:47.667 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:47.667 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43779], 00:41:47.667 | 99.00th=[44303], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:41:47.667 | 99.99th=[67634] 00:41:47.667 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1817.60, stdev=179.10, samples=20 00:41:47.667 iops : min= 352, max= 480, avg=454.40, stdev=44.78, samples=20 00:41:47.667 lat (msec) : 50=99.30%, 100=0.70% 00:41:47.667 cpu : usr=98.53%, sys=1.03%, ctx=26, majf=0, minf=9 00:41:47.667 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:47.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.667 filename2: (groupid=0, jobs=1): err= 0: pid=961384: Mon Nov 18 08:15:39 2024 00:41:47.667 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10045msec) 00:41:47.667 slat (usec): min=9, max=175, avg=46.47, stdev=18.83 00:41:47.667 clat (usec): min=22375, max=62930, avg=34594.01, stdev=4106.00 00:41:47.667 lat (usec): min=22384, max=62948, avg=34640.48, stdev=4105.14 00:41:47.667 clat percentiles (usec): 00:41:47.667 | 1.00th=[25035], 5.00th=[32375], 
10.00th=[32637], 20.00th=[32637], 00:41:47.667 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.667 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.667 | 99.00th=[43779], 99.50th=[44303], 99.90th=[62653], 99.95th=[62653], 00:41:47.667 | 99.99th=[63177] 00:41:47.667 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1830.40, stdev=176.68, samples=20 00:41:47.667 iops : min= 352, max= 480, avg=457.60, stdev=44.17, samples=20 00:41:47.667 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.667 cpu : usr=98.25%, sys=1.30%, ctx=19, majf=0, minf=9 00:41:47.667 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:47.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.667 filename2: (groupid=0, jobs=1): err= 0: pid=961385: Mon Nov 18 08:15:39 2024 00:41:47.667 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10038msec) 00:41:47.667 slat (nsec): min=7847, max=98747, avg=19641.19, stdev=12901.82 00:41:47.667 clat (usec): min=10482, max=81197, avg=33005.43, stdev=6957.28 00:41:47.667 lat (usec): min=10493, max=81217, avg=33025.07, stdev=6958.03 00:41:47.667 clat percentiles (usec): 00:41:47.667 | 1.00th=[18744], 5.00th=[21365], 10.00th=[24511], 20.00th=[30802], 00:41:47.667 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:47.667 | 70.00th=[33424], 80.00th=[34341], 90.00th=[38011], 95.00th=[43254], 00:41:47.667 | 99.00th=[59507], 99.50th=[69731], 99.90th=[81265], 99.95th=[81265], 00:41:47.667 | 99.99th=[81265] 00:41:47.667 bw ( KiB/s): min= 1600, max= 2288, per=4.41%, avg=1931.20, stdev=149.02, samples=20 00:41:47.667 iops : min= 400, max= 572, avg=482.80, stdev=37.26, samples=20 00:41:47.667 lat (msec) : 20=2.79%, 
50=94.90%, 100=2.31% 00:41:47.667 cpu : usr=98.17%, sys=1.43%, ctx=21, majf=0, minf=9 00:41:47.667 IO depths : 1=2.9%, 2=6.0%, 4=14.3%, 8=66.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:41:47.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 complete : 0=0.0%, 4=91.3%, 8=4.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.667 issued rwts: total=4844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.667 filename2: (groupid=0, jobs=1): err= 0: pid=961386: Mon Nov 18 08:15:39 2024 00:41:47.667 read: IOPS=454, BW=1817KiB/s (1860kB/s)(17.8MiB/10041msec) 00:41:47.667 slat (usec): min=8, max=112, avg=49.61, stdev=24.85 00:41:47.667 clat (usec): min=29685, max=70285, avg=34790.67, stdev=4545.75 00:41:47.667 lat (usec): min=29695, max=70316, avg=34840.28, stdev=4539.42 00:41:47.667 clat percentiles (usec): 00:41:47.667 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:41:47.667 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:41:47.667 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.667 | 99.00th=[43779], 99.50th=[66323], 99.90th=[67634], 99.95th=[67634], 00:41:47.667 | 99.99th=[70779] 00:41:47.667 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1817.60, stdev=179.10, samples=20 00:41:47.667 iops : min= 352, max= 480, avg=454.40, stdev=44.78, samples=20 00:41:47.667 lat (msec) : 50=99.30%, 100=0.70% 00:41:47.667 cpu : usr=97.87%, sys=1.42%, ctx=47, majf=0, minf=9 00:41:47.667 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:47.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.668 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.668 filename2: (groupid=0, 
jobs=1): err= 0: pid=961387: Mon Nov 18 08:15:39 2024 00:41:47.668 read: IOPS=456, BW=1825KiB/s (1869kB/s)(17.9MiB/10031msec) 00:41:47.668 slat (usec): min=13, max=134, avg=54.56, stdev=17.31 00:41:47.668 clat (usec): min=23297, max=63218, avg=34584.93, stdev=4033.58 00:41:47.668 lat (usec): min=23349, max=63265, avg=34639.50, stdev=4030.29 00:41:47.668 clat percentiles (usec): 00:41:47.668 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:47.668 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:41:47.668 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:41:47.668 | 99.00th=[43779], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:41:47.668 | 99.99th=[63177] 00:41:47.668 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1824.00, stdev=160.17, samples=20 00:41:47.668 iops : min= 352, max= 480, avg=456.00, stdev=40.04, samples=20 00:41:47.668 lat (msec) : 50=99.65%, 100=0.35% 00:41:47.668 cpu : usr=98.28%, sys=1.30%, ctx=19, majf=0, minf=9 00:41:47.668 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:47.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.668 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:47.668 00:41:47.668 Run status group 0 (all jobs): 00:41:47.668 READ: bw=42.7MiB/s (44.8MB/s), 1816KiB/s-1930KiB/s (1859kB/s-1977kB/s), io=430MiB (451MB), run=10010-10072msec 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 
00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for 
sub in "$@" 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 bdev_null0 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 [2024-11-18 08:15:39.477019] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 bdev_null1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:47.668 { 00:41:47.668 "params": { 00:41:47.668 "name": "Nvme$subsystem", 00:41:47.668 "trtype": "$TEST_TRANSPORT", 00:41:47.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:47.668 "adrfam": "ipv4", 00:41:47.668 "trsvcid": "$NVMF_PORT", 00:41:47.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:47.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:47.668 "hdgst": ${hdgst:-false}, 00:41:47.668 "ddgst": ${ddgst:-false} 00:41:47.668 }, 00:41:47.668 "method": "bdev_nvme_attach_controller" 00:41:47.668 } 00:41:47.668 EOF 00:41:47.668 )") 00:41:47.668 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:47.669 
08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:47.669 { 00:41:47.669 "params": { 00:41:47.669 "name": "Nvme$subsystem", 00:41:47.669 "trtype": "$TEST_TRANSPORT", 00:41:47.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:47.669 "adrfam": "ipv4", 00:41:47.669 "trsvcid": "$NVMF_PORT", 00:41:47.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:47.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:47.669 "hdgst": ${hdgst:-false}, 00:41:47.669 "ddgst": ${ddgst:-false} 00:41:47.669 }, 00:41:47.669 "method": "bdev_nvme_attach_controller" 00:41:47.669 } 00:41:47.669 EOF 00:41:47.669 )") 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:47.669 "params": { 00:41:47.669 "name": "Nvme0", 00:41:47.669 "trtype": "tcp", 00:41:47.669 "traddr": "10.0.0.2", 00:41:47.669 "adrfam": "ipv4", 00:41:47.669 "trsvcid": "4420", 00:41:47.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:47.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:47.669 "hdgst": false, 00:41:47.669 "ddgst": false 00:41:47.669 }, 00:41:47.669 "method": "bdev_nvme_attach_controller" 00:41:47.669 },{ 00:41:47.669 "params": { 00:41:47.669 "name": "Nvme1", 00:41:47.669 "trtype": "tcp", 00:41:47.669 "traddr": "10.0.0.2", 00:41:47.669 "adrfam": "ipv4", 00:41:47.669 "trsvcid": "4420", 00:41:47.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:47.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:47.669 "hdgst": false, 00:41:47.669 "ddgst": false 00:41:47.669 }, 00:41:47.669 "method": 
"bdev_nvme_attach_controller" 00:41:47.669 }' 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:47.669 08:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:47.669 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:47.669 ... 00:41:47.669 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:47.669 ... 
00:41:47.669 fio-3.35 00:41:47.669 Starting 4 threads 00:41:52.931 00:41:52.931 filename0: (groupid=0, jobs=1): err= 0: pid=962644: Mon Nov 18 08:15:45 2024 00:41:52.931 read: IOPS=1922, BW=15.0MiB/s (15.8MB/s)(75.1MiB/5003msec) 00:41:52.931 slat (nsec): min=3917, max=66109, avg=15027.40, stdev=8561.42 00:41:52.931 clat (usec): min=768, max=7540, avg=4110.71, stdev=501.02 00:41:52.931 lat (usec): min=776, max=7556, avg=4125.73, stdev=501.37 00:41:52.931 clat percentiles (usec): 00:41:52.931 | 1.00th=[ 2540], 5.00th=[ 3458], 10.00th=[ 3687], 20.00th=[ 3884], 00:41:52.931 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:41:52.931 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4752], 00:41:52.931 | 99.00th=[ 5866], 99.50th=[ 6456], 99.90th=[ 7242], 99.95th=[ 7439], 00:41:52.931 | 99.99th=[ 7570] 00:41:52.931 bw ( KiB/s): min=14944, max=15760, per=25.19%, avg=15377.60, stdev=257.49, samples=10 00:41:52.931 iops : min= 1868, max= 1970, avg=1922.20, stdev=32.19, samples=10 00:41:52.931 lat (usec) : 1000=0.05% 00:41:52.931 lat (msec) : 2=0.59%, 4=31.69%, 10=67.67% 00:41:52.931 cpu : usr=95.52%, sys=4.00%, ctx=11, majf=0, minf=23 00:41:52.931 IO depths : 1=0.3%, 2=12.5%, 4=59.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.931 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.931 issued rwts: total=9619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:52.931 filename0: (groupid=0, jobs=1): err= 0: pid=962645: Mon Nov 18 08:15:45 2024 00:41:52.931 read: IOPS=1878, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5002msec) 00:41:52.931 slat (nsec): min=4176, max=65634, avg=18132.84, stdev=8528.79 00:41:52.931 clat (usec): min=791, max=7671, avg=4194.70, stdev=600.61 00:41:52.931 lat (usec): min=805, max=7693, avg=4212.83, stdev=600.55 00:41:52.931 clat percentiles (usec): 
00:41:52.931 | 1.00th=[ 2540], 5.00th=[ 3490], 10.00th=[ 3752], 20.00th=[ 3949], 00:41:52.931 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4178], 00:41:52.931 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 5276], 00:41:52.931 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7242], 99.95th=[ 7504], 00:41:52.931 | 99.99th=[ 7701] 00:41:52.931 bw ( KiB/s): min=14480, max=15440, per=24.64%, avg=15041.78, stdev=316.83, samples=9 00:41:52.931 iops : min= 1810, max= 1930, avg=1880.22, stdev=39.60, samples=9 00:41:52.931 lat (usec) : 1000=0.10% 00:41:52.931 lat (msec) : 2=0.42%, 4=25.86%, 10=73.62% 00:41:52.931 cpu : usr=95.68%, sys=3.86%, ctx=9, majf=0, minf=53 00:41:52.931 IO depths : 1=0.1%, 2=16.5%, 4=56.3%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.931 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.931 issued rwts: total=9395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:52.931 filename1: (groupid=0, jobs=1): err= 0: pid=962646: Mon Nov 18 08:15:45 2024 00:41:52.931 read: IOPS=1927, BW=15.1MiB/s (15.8MB/s)(75.3MiB/5001msec) 00:41:52.931 slat (nsec): min=3946, max=68325, avg=18005.59, stdev=9719.88 00:41:52.931 clat (usec): min=965, max=7593, avg=4083.50, stdev=530.05 00:41:52.931 lat (usec): min=979, max=7601, avg=4101.51, stdev=530.90 00:41:52.931 clat percentiles (usec): 00:41:52.931 | 1.00th=[ 2376], 5.00th=[ 3359], 10.00th=[ 3621], 20.00th=[ 3851], 00:41:52.931 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:41:52.931 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4752], 00:41:52.931 | 99.00th=[ 6128], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 7373], 00:41:52.931 | 99.99th=[ 7570] 00:41:52.931 bw ( KiB/s): min=14736, max=16240, per=25.25%, avg=15414.30, stdev=444.00, samples=10 00:41:52.931 iops : min= 
1842, max= 2030, avg=1926.70, stdev=55.39, samples=10 00:41:52.931 lat (usec) : 1000=0.03% 00:41:52.931 lat (msec) : 2=0.57%, 4=34.13%, 10=65.27% 00:41:52.931 cpu : usr=93.68%, sys=4.48%, ctx=135, majf=0, minf=43 00:41:52.931 IO depths : 1=0.5%, 2=18.6%, 4=54.6%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.931 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.931 issued rwts: total=9639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:52.931 filename1: (groupid=0, jobs=1): err= 0: pid=962647: Mon Nov 18 08:15:45 2024 00:41:52.931 read: IOPS=1904, BW=14.9MiB/s (15.6MB/s)(74.4MiB/5002msec) 00:41:52.931 slat (nsec): min=4077, max=68392, avg=18594.46, stdev=9665.46 00:41:52.931 clat (usec): min=670, max=7407, avg=4127.44, stdev=580.53 00:41:52.931 lat (usec): min=683, max=7420, avg=4146.04, stdev=580.96 00:41:52.931 clat percentiles (usec): 00:41:52.931 | 1.00th=[ 2245], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3949], 00:41:52.932 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4146], 00:41:52.932 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 5014], 00:41:52.932 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7111], 99.95th=[ 7242], 00:41:52.932 | 99.99th=[ 7439] 00:41:52.932 bw ( KiB/s): min=14976, max=15904, per=25.04%, avg=15290.67, stdev=335.24, samples=9 00:41:52.932 iops : min= 1872, max= 1988, avg=1911.33, stdev=41.90, samples=9 00:41:52.932 lat (usec) : 750=0.02%, 1000=0.16% 00:41:52.932 lat (msec) : 2=0.70%, 4=29.12%, 10=70.00% 00:41:52.932 cpu : usr=95.84%, sys=3.56%, ctx=64, majf=0, minf=51 00:41:52.932 IO depths : 1=0.2%, 2=19.5%, 4=54.0%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.932 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:52.932 issued rwts: total=9527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.932 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:52.932 00:41:52.932 Run status group 0 (all jobs): 00:41:52.932 READ: bw=59.6MiB/s (62.5MB/s), 14.7MiB/s-15.1MiB/s (15.4MB/s-15.8MB/s), io=298MiB (313MB), run=5001-5003msec 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.932 00:41:52.932 real 0m24.168s 00:41:52.932 user 4m33.305s 00:41:52.932 sys 0m6.266s 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 ************************************ 00:41:52.932 END TEST fio_dif_rand_params 00:41:52.932 ************************************ 00:41:52.932 08:15:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:52.932 08:15:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:52.932 08:15:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 ************************************ 00:41:52.932 START TEST fio_dif_digest 00:41:52.932 ************************************ 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:52.932 
08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 bdev_null0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:52.932 [2024-11-18 08:15:45.853017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:52.932 { 00:41:52.932 "params": { 00:41:52.932 "name": "Nvme$subsystem", 00:41:52.932 "trtype": "$TEST_TRANSPORT", 00:41:52.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:52.932 "adrfam": 
"ipv4", 00:41:52.932 "trsvcid": "$NVMF_PORT", 00:41:52.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:52.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:52.932 "hdgst": ${hdgst:-false}, 00:41:52.932 "ddgst": ${ddgst:-false} 00:41:52.932 }, 00:41:52.932 "method": "bdev_nvme_attach_controller" 00:41:52.932 } 00:41:52.932 EOF 00:41:52.932 )") 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:52.932 08:15:45 
nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:52.932 08:15:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:52.932 "params": { 00:41:52.932 "name": "Nvme0", 00:41:52.932 "trtype": "tcp", 00:41:52.932 "traddr": "10.0.0.2", 00:41:52.932 "adrfam": "ipv4", 00:41:52.932 "trsvcid": "4420", 00:41:52.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:52.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:52.933 "hdgst": true, 00:41:52.933 "ddgst": true 00:41:52.933 }, 00:41:52.933 "method": "bdev_nvme_attach_controller" 00:41:52.933 }' 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:52.933 08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:52.933 
08:15:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:53.191 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:53.191 ... 00:41:53.191 fio-3.35 00:41:53.191 Starting 3 threads 00:42:05.391 00:42:05.391 filename0: (groupid=0, jobs=1): err= 0: pid=963512: Mon Nov 18 08:15:56 2024 00:42:05.391 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(273MiB/10048msec) 00:42:05.391 slat (nsec): min=4049, max=28899, avg=14514.63, stdev=1626.72 00:42:05.391 clat (usec): min=10434, max=52753, avg=13783.86, stdev=1471.81 00:42:05.391 lat (usec): min=10448, max=52767, avg=13798.38, stdev=1471.79 00:42:05.391 clat percentiles (usec): 00:42:05.391 | 1.00th=[11600], 5.00th=[12256], 10.00th=[12518], 20.00th=[12911], 00:42:05.391 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13960], 00:42:05.391 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14877], 95.00th=[15270], 00:42:05.391 | 99.00th=[16188], 99.50th=[16581], 99.90th=[20841], 99.95th=[47973], 00:42:05.391 | 99.99th=[52691] 00:42:05.391 bw ( KiB/s): min=26880, max=28416, per=34.47%, avg=27878.40, stdev=379.71, samples=20 00:42:05.391 iops : min= 210, max= 222, avg=217.80, stdev= 2.97, samples=20 00:42:05.391 lat (msec) : 20=99.77%, 50=0.18%, 100=0.05% 00:42:05.391 cpu : usr=93.71%, sys=5.64%, ctx=148, majf=0, minf=138 00:42:05.391 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.391 issued rwts: total=2181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:05.391 filename0: (groupid=0, jobs=1): err= 0: pid=963513: Mon Nov 18 08:15:56 2024 00:42:05.391 read: IOPS=205, BW=25.7MiB/s 
(27.0MB/s)(259MiB/10047msec) 00:42:05.391 slat (nsec): min=4089, max=27476, avg=14576.63, stdev=1493.49 00:42:05.391 clat (usec): min=11213, max=50585, avg=14537.31, stdev=1419.77 00:42:05.391 lat (usec): min=11229, max=50601, avg=14551.89, stdev=1419.79 00:42:05.391 clat percentiles (usec): 00:42:05.391 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13304], 20.00th=[13698], 00:42:05.391 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:42:05.391 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15664], 95.00th=[16188], 00:42:05.391 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18482], 99.95th=[47449], 00:42:05.391 | 99.99th=[50594] 00:42:05.391 bw ( KiB/s): min=25600, max=26880, per=32.68%, avg=26434.60, stdev=358.60, samples=20 00:42:05.391 iops : min= 200, max= 210, avg=206.50, stdev= 2.82, samples=20 00:42:05.391 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:42:05.391 cpu : usr=93.98%, sys=5.53%, ctx=14, majf=0, minf=115 00:42:05.391 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.391 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:05.391 filename0: (groupid=0, jobs=1): err= 0: pid=963514: Mon Nov 18 08:15:56 2024 00:42:05.391 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(263MiB/10047msec) 00:42:05.391 slat (nsec): min=4535, max=31944, avg=15645.87, stdev=1904.00 00:42:05.391 clat (usec): min=11075, max=53953, avg=14312.45, stdev=1494.32 00:42:05.391 lat (usec): min=11091, max=53968, avg=14328.09, stdev=1494.33 00:42:05.391 clat percentiles (usec): 00:42:05.391 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13173], 20.00th=[13566], 00:42:05.391 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:42:05.391 | 70.00th=[14746], 80.00th=[15008], 
90.00th=[15401], 95.00th=[15926], 00:42:05.391 | 99.00th=[16712], 99.50th=[17171], 99.90th=[22676], 99.95th=[50070], 00:42:05.392 | 99.99th=[53740] 00:42:05.392 bw ( KiB/s): min=26368, max=27136, per=33.20%, avg=26854.40, stdev=233.45, samples=20 00:42:05.392 iops : min= 206, max= 212, avg=209.80, stdev= 1.82, samples=20 00:42:05.392 lat (msec) : 20=99.86%, 50=0.05%, 100=0.10% 00:42:05.392 cpu : usr=94.18%, sys=5.29%, ctx=18, majf=0, minf=140 00:42:05.392 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.392 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.392 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:05.392 00:42:05.392 Run status group 0 (all jobs): 00:42:05.392 READ: bw=79.0MiB/s (82.8MB/s), 25.7MiB/s-27.1MiB/s (27.0MB/s-28.5MB/s), io=794MiB (832MB), run=10047-10048msec 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.392 00:42:05.392 real 0m11.032s 00:42:05.392 user 0m29.338s 00:42:05.392 sys 0m1.905s 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:05.392 08:15:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:05.392 ************************************ 00:42:05.392 END TEST fio_dif_digest 00:42:05.392 ************************************ 00:42:05.392 08:15:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:05.392 08:15:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:05.392 rmmod nvme_tcp 00:42:05.392 rmmod nvme_fabrics 00:42:05.392 rmmod nvme_keyring 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 956859 ']' 00:42:05.392 08:15:56 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 956859 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 956859 ']' 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 956859 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@959 -- # 
uname 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956859 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956859' 00:42:05.392 killing process with pid 956859 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@973 -- # kill 956859 00:42:05.392 08:15:56 nvmf_dif -- common/autotest_common.sh@978 -- # wait 956859 00:42:05.392 08:15:57 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:05.392 08:15:57 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:05.392 Waiting for block devices as requested 00:42:05.392 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:05.392 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:05.650 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:05.650 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:05.650 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:05.908 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:05.908 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:05.908 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:05.908 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:06.167 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:06.167 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:06.167 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:06.167 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:06.426 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:06.426 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:06.426 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:06.426 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:06.685 08:15:59 nvmf_dif -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:06.685 08:15:59 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:06.685 08:15:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:06.685 08:15:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:42:06.685 08:15:59 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:06.685 08:15:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:42:06.685 08:15:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:06.685 08:15:59 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:06.685 08:15:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:06.685 08:15:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:06.685 08:15:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:08.593 08:16:01 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:08.594 00:42:08.594 real 1m6.738s 00:42:08.594 user 6m29.642s 00:42:08.594 sys 0m17.933s 00:42:08.594 08:16:01 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:08.594 08:16:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:08.594 ************************************ 00:42:08.594 END TEST nvmf_dif 00:42:08.594 ************************************ 00:42:08.594 08:16:01 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:08.594 08:16:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:08.594 08:16:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:08.594 08:16:01 -- common/autotest_common.sh@10 -- # set +x 00:42:08.852 ************************************ 00:42:08.852 START TEST nvmf_abort_qd_sizes 00:42:08.852 ************************************ 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:08.852 * Looking for test storage... 00:42:08.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:08.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.852 --rc genhtml_branch_coverage=1 00:42:08.852 --rc genhtml_function_coverage=1 00:42:08.852 --rc genhtml_legend=1 00:42:08.852 --rc geninfo_all_blocks=1 00:42:08.852 --rc geninfo_unexecuted_blocks=1 00:42:08.852 00:42:08.852 ' 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:08.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.852 --rc genhtml_branch_coverage=1 00:42:08.852 --rc genhtml_function_coverage=1 00:42:08.852 --rc genhtml_legend=1 00:42:08.852 --rc 
geninfo_all_blocks=1 00:42:08.852 --rc geninfo_unexecuted_blocks=1 00:42:08.852 00:42:08.852 ' 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:08.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.852 --rc genhtml_branch_coverage=1 00:42:08.852 --rc genhtml_function_coverage=1 00:42:08.852 --rc genhtml_legend=1 00:42:08.852 --rc geninfo_all_blocks=1 00:42:08.852 --rc geninfo_unexecuted_blocks=1 00:42:08.852 00:42:08.852 ' 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:08.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.852 --rc genhtml_branch_coverage=1 00:42:08.852 --rc genhtml_function_coverage=1 00:42:08.852 --rc genhtml_legend=1 00:42:08.852 --rc geninfo_all_blocks=1 00:42:08.852 --rc geninfo_unexecuted_blocks=1 00:42:08.852 00:42:08.852 ' 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:08.852 08:16:01 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:08.852 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:08.853 08:16:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:08.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:08.853 08:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:10.760 08:16:03 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:10.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:10.760 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:10.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:10.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:10.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:10.761 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:11.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:11.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:42:11.019 00:42:11.019 --- 10.0.0.2 ping statistics --- 00:42:11.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.019 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:11.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:11.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:42:11.019 00:42:11.019 --- 10.0.0.1 ping statistics --- 00:42:11.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.019 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:11.019 08:16:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:12.398 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:12.398 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:12.398 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:12.398 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:12.398 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:12.398 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:12.398 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:12.398 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:12.398 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:12.398 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:12.398 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:12.398 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:12.398 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:12.398 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:12.398 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:12.398 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:13.335 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:13.335 08:16:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=968318 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 968318 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 968318 ']' 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:13.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:13.335 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:13.335 [2024-11-18 08:16:06.401673] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:13.335 [2024-11-18 08:16:06.401761] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:13.593 [2024-11-18 08:16:06.475048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:13.593 [2024-11-18 08:16:06.522532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:13.593 [2024-11-18 08:16:06.522587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:13.593 [2024-11-18 08:16:06.522615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:13.593 [2024-11-18 08:16:06.522627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:13.593 [2024-11-18 08:16:06.522636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:13.593 [2024-11-18 08:16:06.524095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:13.593 [2024-11-18 08:16:06.524159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:13.593 [2024-11-18 08:16:06.524227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:13.593 [2024-11-18 08:16:06.524230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:13.593 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:13.594 08:16:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:13.852 ************************************ 00:42:13.852 START TEST spdk_target_abort 00:42:13.852 ************************************ 00:42:13.852 08:16:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:13.852 08:16:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:13.852 08:16:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:13.852 08:16:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.852 08:16:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.130 spdk_targetn1 00:42:17.130 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.130 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:17.130 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.130 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.130 [2024-11-18 08:16:09.536522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:17.130 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.130 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:17.130 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.131 [2024-11-18 08:16:09.584873] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:17.131 08:16:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:20.412 Initializing NVMe Controllers 00:42:20.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:20.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:20.412 Initialization complete. Launching workers. 
00:42:20.412 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11474, failed: 0 00:42:20.412 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1200, failed to submit 10274 00:42:20.412 success 679, unsuccessful 521, failed 0 00:42:20.412 08:16:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:20.412 08:16:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:23.691 Initializing NVMe Controllers 00:42:23.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:23.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:23.691 Initialization complete. Launching workers. 00:42:23.691 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8578, failed: 0 00:42:23.691 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 7339 00:42:23.691 success 334, unsuccessful 905, failed 0 00:42:23.691 08:16:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:23.691 08:16:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:26.970 Initializing NVMe Controllers 00:42:26.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:26.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:26.970 Initialization complete. Launching workers. 
00:42:26.970 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31563, failed: 0 00:42:26.970 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2775, failed to submit 28788 00:42:26.970 success 496, unsuccessful 2279, failed 0 00:42:26.970 08:16:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:26.970 08:16:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.970 08:16:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:26.970 08:16:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.970 08:16:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:26.970 08:16:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.970 08:16:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 968318 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 968318 ']' 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 968318 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 968318 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 968318' 00:42:27.903 killing process with pid 968318 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 968318 00:42:27.903 08:16:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 968318 00:42:28.162 00:42:28.162 real 0m14.364s 00:42:28.162 user 0m54.824s 00:42:28.162 sys 0m2.357s 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:28.162 ************************************ 00:42:28.162 END TEST spdk_target_abort 00:42:28.162 ************************************ 00:42:28.162 08:16:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:28.162 08:16:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:28.162 08:16:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:28.162 08:16:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:28.162 ************************************ 00:42:28.162 START TEST kernel_target_abort 00:42:28.162 ************************************ 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:28.162 08:16:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:28.162 08:16:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:29.538 Waiting for block devices as requested 00:42:29.538 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:29.538 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:29.538 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:29.799 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:29.799 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:29.799 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:29.799 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:30.070 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:30.070 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:30.070 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:30.070 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:30.391 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:30.391 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:30.391 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:30.391 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:30.671 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:30.671 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:30.671 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:30.671 No valid GPT data, bailing 00:42:30.931 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:30.931 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:30.931 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:30.931 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:30.931 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:30.931 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:30.932 00:42:30.932 Discovery Log Number of Records 2, Generation counter 2 00:42:30.932 =====Discovery Log Entry 0====== 00:42:30.932 trtype: tcp 00:42:30.932 adrfam: ipv4 00:42:30.932 subtype: current discovery subsystem 00:42:30.932 treq: not specified, sq flow control disable supported 00:42:30.932 portid: 1 00:42:30.932 trsvcid: 4420 00:42:30.932 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:30.932 traddr: 10.0.0.1 00:42:30.932 eflags: none 00:42:30.932 sectype: none 00:42:30.932 =====Discovery Log Entry 1====== 00:42:30.932 trtype: tcp 00:42:30.932 adrfam: ipv4 00:42:30.932 subtype: nvme subsystem 00:42:30.932 treq: not specified, sq flow control disable supported 00:42:30.932 portid: 1 00:42:30.932 trsvcid: 4420 00:42:30.932 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:30.932 traddr: 10.0.0.1 00:42:30.932 eflags: none 00:42:30.932 sectype: none 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:30.932 08:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:34.227 Initializing NVMe Controllers 00:42:34.227 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:34.227 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:34.227 Initialization complete. Launching workers. 
00:42:34.227 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55872, failed: 0 00:42:34.227 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55872, failed to submit 0 00:42:34.227 success 0, unsuccessful 55872, failed 0 00:42:34.227 08:16:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:34.227 08:16:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:37.519 Initializing NVMe Controllers 00:42:37.519 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:37.519 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:37.519 Initialization complete. Launching workers. 00:42:37.519 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100426, failed: 0 00:42:37.519 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25294, failed to submit 75132 00:42:37.519 success 0, unsuccessful 25294, failed 0 00:42:37.519 08:16:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:37.519 08:16:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:40.813 Initializing NVMe Controllers 00:42:40.813 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:40.813 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:40.813 Initialization complete. Launching workers. 
00:42:40.813 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95507, failed: 0 00:42:40.813 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23894, failed to submit 71613 00:42:40.813 success 0, unsuccessful 23894, failed 0 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:40.813 08:16:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:41.381 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:41.381 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:41.381 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:41.381 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:41.381 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:41.381 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:41.381 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:41.381 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:41.381 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:41.381 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:41.640 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:41.640 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:41.640 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:41.640 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:41.640 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:41.640 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:42.581 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:42.581 00:42:42.581 real 0m14.430s 00:42:42.581 user 0m6.658s 00:42:42.581 sys 0m3.238s 00:42:42.581 08:16:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:42.581 08:16:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:42.581 ************************************ 00:42:42.581 END TEST kernel_target_abort 00:42:42.581 ************************************ 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:42.581 rmmod nvme_tcp 00:42:42.581 rmmod nvme_fabrics 00:42:42.581 rmmod nvme_keyring 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 968318 ']' 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 968318 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 968318 ']' 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 968318 00:42:42.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (968318) - No such process 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 968318 is not found' 00:42:42.581 Process with pid 968318 is not found 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:42.581 08:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:43.955 Waiting for block devices as requested 00:42:43.955 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:43.955 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:44.214 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:44.214 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:44.214 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:44.214 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:44.473 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:44.473 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:44.473 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:44.473 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:44.732 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:44.732 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:44.732 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:44.990 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:44.990 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:42:44.990 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:44.990 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:45.250 08:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:45.251 08:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:45.251 08:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:47.159 08:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:47.159 00:42:47.159 real 0m38.515s 00:42:47.159 user 1m3.758s 00:42:47.159 sys 0m9.114s 00:42:47.159 08:16:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:47.159 08:16:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:47.159 ************************************ 00:42:47.159 END TEST nvmf_abort_qd_sizes 00:42:47.159 ************************************ 00:42:47.159 08:16:40 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:47.159 08:16:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:47.159 08:16:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:42:47.159 08:16:40 -- common/autotest_common.sh@10 -- # set +x 00:42:47.418 ************************************ 00:42:47.418 START TEST keyring_file 00:42:47.418 ************************************ 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:47.418 * Looking for test storage... 00:42:47.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:47.418 08:16:40 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:47.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.418 --rc genhtml_branch_coverage=1 00:42:47.418 --rc genhtml_function_coverage=1 00:42:47.418 --rc genhtml_legend=1 00:42:47.418 --rc geninfo_all_blocks=1 00:42:47.418 --rc geninfo_unexecuted_blocks=1 00:42:47.418 00:42:47.418 ' 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:47.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.418 --rc genhtml_branch_coverage=1 00:42:47.418 --rc genhtml_function_coverage=1 00:42:47.418 --rc genhtml_legend=1 00:42:47.418 --rc geninfo_all_blocks=1 00:42:47.418 --rc 
geninfo_unexecuted_blocks=1 00:42:47.418 00:42:47.418 ' 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:47.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.418 --rc genhtml_branch_coverage=1 00:42:47.418 --rc genhtml_function_coverage=1 00:42:47.418 --rc genhtml_legend=1 00:42:47.418 --rc geninfo_all_blocks=1 00:42:47.418 --rc geninfo_unexecuted_blocks=1 00:42:47.418 00:42:47.418 ' 00:42:47.418 08:16:40 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:47.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.418 --rc genhtml_branch_coverage=1 00:42:47.418 --rc genhtml_function_coverage=1 00:42:47.418 --rc genhtml_legend=1 00:42:47.418 --rc geninfo_all_blocks=1 00:42:47.418 --rc geninfo_unexecuted_blocks=1 00:42:47.418 00:42:47.418 ' 00:42:47.418 08:16:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:47.418 08:16:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:47.418 08:16:40 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:47.418 08:16:40 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:47.418 08:16:40 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:47.419 08:16:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.419 08:16:40 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.419 08:16:40 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.419 08:16:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:47.419 08:16:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:47.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:47.419 08:16:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:47.419 08:16:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:47.419 08:16:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:47.419 08:16:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:47.419 08:16:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:47.419 08:16:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MnImIE4Wf4 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MnImIE4Wf4 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MnImIE4Wf4 00:42:47.419 08:16:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.MnImIE4Wf4 00:42:47.419 08:16:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7LoEhIc1Xu 00:42:47.419 08:16:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:47.419 08:16:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:47.678 08:16:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7LoEhIc1Xu 00:42:47.678 08:16:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7LoEhIc1Xu 00:42:47.678 08:16:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.7LoEhIc1Xu 
00:42:47.678 08:16:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=974081 00:42:47.678 08:16:40 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:47.678 08:16:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 974081 00:42:47.678 08:16:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 974081 ']' 00:42:47.678 08:16:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:47.678 08:16:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:47.678 08:16:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:47.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:47.678 08:16:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:47.678 08:16:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:47.678 [2024-11-18 08:16:40.578684] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:47.678 [2024-11-18 08:16:40.578783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974081 ] 00:42:47.678 [2024-11-18 08:16:40.648708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.678 [2024-11-18 08:16:40.697303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.936 08:16:40 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:47.936 08:16:40 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:47.936 08:16:40 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:47.936 08:16:40 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.936 08:16:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:47.936 [2024-11-18 08:16:40.969763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:47.936 null0 00:42:47.936 [2024-11-18 08:16:41.001849] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:47.936 [2024-11-18 08:16:41.002375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.936 08:16:41 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.936 08:16:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:48.195 [2024-11-18 08:16:41.029892] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:48.195 request: 00:42:48.195 { 00:42:48.195 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:48.195 "secure_channel": false, 00:42:48.195 "listen_address": { 00:42:48.195 "trtype": "tcp", 00:42:48.195 "traddr": "127.0.0.1", 00:42:48.195 "trsvcid": "4420" 00:42:48.195 }, 00:42:48.195 "method": "nvmf_subsystem_add_listener", 00:42:48.195 "req_id": 1 00:42:48.195 } 00:42:48.195 Got JSON-RPC error response 00:42:48.195 response: 00:42:48.195 { 00:42:48.195 "code": -32602, 00:42:48.195 "message": "Invalid parameters" 00:42:48.195 } 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:48.195 08:16:41 keyring_file -- keyring/file.sh@47 -- # bperfpid=974093 00:42:48.195 08:16:41 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:48.195 08:16:41 keyring_file -- keyring/file.sh@49 -- # waitforlisten 974093 /var/tmp/bperf.sock 00:42:48.195 08:16:41 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 974093 ']' 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:48.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:48.195 08:16:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:48.195 [2024-11-18 08:16:41.084218] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:42:48.195 [2024-11-18 08:16:41.084298] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974093 ] 00:42:48.195 [2024-11-18 08:16:41.156196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.195 [2024-11-18 08:16:41.206282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:48.453 08:16:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:48.453 08:16:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:48.453 08:16:41 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MnImIE4Wf4 00:42:48.453 08:16:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MnImIE4Wf4 00:42:48.711 08:16:41 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7LoEhIc1Xu 00:42:48.711 08:16:41 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7LoEhIc1Xu 00:42:48.969 08:16:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:48.969 08:16:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:48.969 08:16:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:48.969 08:16:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.969 08:16:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:49.228 08:16:42 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MnImIE4Wf4 == \/\t\m\p\/\t\m\p\.\M\n\I\m\I\E\4\W\f\4 ]] 00:42:49.228 08:16:42 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:49.228 08:16:42 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:49.228 08:16:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.228 08:16:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.228 08:16:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:49.486 08:16:42 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.7LoEhIc1Xu == \/\t\m\p\/\t\m\p\.\7\L\o\E\h\I\c\1\X\u ]] 00:42:49.486 08:16:42 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:49.486 08:16:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:49.486 08:16:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:49.486 08:16:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.486 08:16:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.486 08:16:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:49.744 08:16:42 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:49.744 08:16:42 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:49.744 08:16:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:49.744 08:16:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:49.744 08:16:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.744 08:16:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:49.744 08:16:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.003 08:16:42 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:50.003 08:16:42 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:50.003 08:16:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:50.261 [2024-11-18 08:16:43.207613] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:50.261 nvme0n1 00:42:50.261 08:16:43 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:50.261 08:16:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:50.261 08:16:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:50.261 08:16:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.261 08:16:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:50.261 08:16:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:50.519 08:16:43 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:50.519 08:16:43 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:50.519 08:16:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:50.519 08:16:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:50.519 08:16:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.519 08:16:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.519 08:16:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:51.085 08:16:43 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:51.085 08:16:43 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:51.085 Running I/O for 1 seconds... 00:42:52.022 10434.00 IOPS, 40.76 MiB/s 00:42:52.022 Latency(us) 00:42:52.022 [2024-11-18T07:16:45.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:52.023 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:52.023 nvme0n1 : 1.05 10082.48 39.38 0.00 0.00 12279.54 9175.04 49127.73 00:42:52.023 [2024-11-18T07:16:45.111Z] =================================================================================================================== 00:42:52.023 [2024-11-18T07:16:45.111Z] Total : 10082.48 39.38 0.00 0.00 12279.54 9175.04 49127.73 00:42:52.023 { 00:42:52.023 "results": [ 00:42:52.023 { 00:42:52.023 "job": "nvme0n1", 00:42:52.023 "core_mask": "0x2", 00:42:52.023 "workload": "randrw", 00:42:52.023 "percentage": 50, 00:42:52.023 "status": "finished", 00:42:52.023 "queue_depth": 128, 00:42:52.023 "io_size": 4096, 00:42:52.023 "runtime": 1.047659, 00:42:52.023 "iops": 10082.479127273282, 00:42:52.023 "mibps": 39.38468409091126, 
00:42:52.023 "io_failed": 0, 00:42:52.023 "io_timeout": 0, 00:42:52.023 "avg_latency_us": 12279.542048169536, 00:42:52.023 "min_latency_us": 9175.04, 00:42:52.023 "max_latency_us": 49127.72740740741 00:42:52.023 } 00:42:52.023 ], 00:42:52.023 "core_count": 1 00:42:52.023 } 00:42:52.023 08:16:45 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:52.023 08:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:52.281 08:16:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:52.281 08:16:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:52.281 08:16:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:52.281 08:16:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.281 08:16:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:52.281 08:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.539 08:16:45 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:52.539 08:16:45 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:52.539 08:16:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:52.539 08:16:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:52.539 08:16:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.539 08:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.539 08:16:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:52.797 08:16:45 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:52.797 08:16:45 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:52.797 08:16:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:52.797 08:16:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:52.797 08:16:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:52.797 08:16:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:52.797 08:16:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:53.056 08:16:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:53.056 08:16:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:53.056 08:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:53.056 [2024-11-18 08:16:46.140023] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:53.056 [2024-11-18 08:16:46.140611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4fce0 (107): Transport endpoint is not connected 00:42:53.056 [2024-11-18 08:16:46.141602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4fce0 (9): Bad file descriptor 00:42:53.056 [2024-11-18 08:16:46.142601] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:53.056 [2024-11-18 08:16:46.142632] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:53.056 [2024-11-18 08:16:46.142648] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:53.056 [2024-11-18 08:16:46.142664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:42:53.314 request: 00:42:53.314 { 00:42:53.314 "name": "nvme0", 00:42:53.314 "trtype": "tcp", 00:42:53.314 "traddr": "127.0.0.1", 00:42:53.314 "adrfam": "ipv4", 00:42:53.314 "trsvcid": "4420", 00:42:53.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:53.314 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:53.314 "prchk_reftag": false, 00:42:53.314 "prchk_guard": false, 00:42:53.314 "hdgst": false, 00:42:53.314 "ddgst": false, 00:42:53.314 "psk": "key1", 00:42:53.314 "allow_unrecognized_csi": false, 00:42:53.314 "method": "bdev_nvme_attach_controller", 00:42:53.314 "req_id": 1 00:42:53.314 } 00:42:53.314 Got JSON-RPC error response 00:42:53.314 response: 00:42:53.314 { 00:42:53.314 "code": -5, 00:42:53.314 "message": "Input/output error" 00:42:53.314 } 00:42:53.314 08:16:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:53.314 08:16:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:53.314 08:16:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:53.314 08:16:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:53.314 08:16:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:53.314 08:16:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:53.314 08:16:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.314 08:16:46 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:42:53.315 08:16:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.315 08:16:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:53.573 08:16:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:53.573 08:16:46 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:53.573 08:16:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:53.573 08:16:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.573 08:16:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:53.573 08:16:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.573 08:16:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:53.830 08:16:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:53.830 08:16:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:53.830 08:16:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:54.088 08:16:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:54.088 08:16:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:54.346 08:16:47 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:54.346 08:16:47 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:54.346 08:16:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:54.604 08:16:47 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:42:54.604 08:16:47 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.MnImIE4Wf4 00:42:54.604 08:16:47 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.MnImIE4Wf4 00:42:54.604 08:16:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:54.604 08:16:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.MnImIE4Wf4 00:42:54.604 08:16:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:54.604 08:16:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:54.604 08:16:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:54.604 08:16:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:54.604 08:16:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MnImIE4Wf4 00:42:54.604 08:16:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MnImIE4Wf4 00:42:54.862 [2024-11-18 08:16:47.786640] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MnImIE4Wf4': 0100660 00:42:54.862 [2024-11-18 08:16:47.786681] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:54.862 request: 00:42:54.862 { 00:42:54.862 "name": "key0", 00:42:54.862 "path": "/tmp/tmp.MnImIE4Wf4", 00:42:54.862 "method": "keyring_file_add_key", 00:42:54.862 "req_id": 1 00:42:54.862 } 00:42:54.862 Got JSON-RPC error response 00:42:54.862 response: 00:42:54.862 { 00:42:54.862 "code": -1, 00:42:54.862 "message": "Operation not permitted" 00:42:54.862 } 00:42:54.862 08:16:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:54.862 08:16:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:54.862 08:16:47 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:54.862 08:16:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:54.862 08:16:47 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.MnImIE4Wf4 00:42:54.862 08:16:47 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MnImIE4Wf4 00:42:54.862 08:16:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MnImIE4Wf4 00:42:55.120 08:16:48 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.MnImIE4Wf4 00:42:55.120 08:16:48 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:55.120 08:16:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:55.120 08:16:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:55.120 08:16:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.120 08:16:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.120 08:16:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:55.378 08:16:48 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:55.378 08:16:48 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.378 08:16:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:55.378 08:16:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.378 08:16:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:55.378 08:16:48 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:55.378 08:16:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:55.378 08:16:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:55.378 08:16:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.378 08:16:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.636 [2024-11-18 08:16:48.608892] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.MnImIE4Wf4': No such file or directory 00:42:55.636 [2024-11-18 08:16:48.608928] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:55.636 [2024-11-18 08:16:48.608965] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:55.636 [2024-11-18 08:16:48.608979] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:55.636 [2024-11-18 08:16:48.608999] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:55.636 [2024-11-18 08:16:48.609010] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:55.636 request: 00:42:55.636 { 00:42:55.636 "name": "nvme0", 00:42:55.636 "trtype": "tcp", 00:42:55.636 "traddr": "127.0.0.1", 00:42:55.636 "adrfam": "ipv4", 00:42:55.636 "trsvcid": "4420", 00:42:55.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:55.636 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:42:55.636 "prchk_reftag": false, 00:42:55.636 "prchk_guard": false, 00:42:55.636 "hdgst": false, 00:42:55.636 "ddgst": false, 00:42:55.636 "psk": "key0", 00:42:55.636 "allow_unrecognized_csi": false, 00:42:55.636 "method": "bdev_nvme_attach_controller", 00:42:55.636 "req_id": 1 00:42:55.636 } 00:42:55.636 Got JSON-RPC error response 00:42:55.636 response: 00:42:55.636 { 00:42:55.636 "code": -19, 00:42:55.636 "message": "No such device" 00:42:55.636 } 00:42:55.636 08:16:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:55.636 08:16:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:55.636 08:16:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:55.636 08:16:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:55.636 08:16:48 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:55.636 08:16:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:55.894 08:16:48 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.i4Czwd1FZK 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:55.894 08:16:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:55.894 08:16:48 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:42:55.894 08:16:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:55.894 08:16:48 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:55.894 08:16:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:55.894 08:16:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.i4Czwd1FZK 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.i4Czwd1FZK 00:42:55.894 08:16:48 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.i4Czwd1FZK 00:42:55.894 08:16:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i4Czwd1FZK 00:42:55.894 08:16:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i4Czwd1FZK 00:42:56.152 08:16:49 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:56.153 08:16:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:56.718 nvme0n1 00:42:56.718 08:16:49 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:56.718 08:16:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:56.718 08:16:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:56.718 08:16:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:56.718 08:16:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:56.718 
08:16:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:56.976 08:16:49 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:56.976 08:16:49 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:56.977 08:16:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:57.235 08:16:50 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:57.235 08:16:50 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:57.235 08:16:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.235 08:16:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:57.235 08:16:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.494 08:16:50 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:57.494 08:16:50 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:57.494 08:16:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:57.494 08:16:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:57.494 08:16:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.494 08:16:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.494 08:16:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:57.751 08:16:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:57.751 08:16:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:57.752 08:16:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:42:58.010 08:16:50 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:58.010 08:16:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:58.010 08:16:50 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:58.266 08:16:51 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:58.266 08:16:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i4Czwd1FZK 00:42:58.266 08:16:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i4Czwd1FZK 00:42:58.524 08:16:51 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7LoEhIc1Xu 00:42:58.524 08:16:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7LoEhIc1Xu 00:42:58.782 08:16:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:58.782 08:16:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:59.041 nvme0n1 00:42:59.041 08:16:52 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:59.041 08:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:59.608 08:16:52 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:59.608 "subsystems": [ 00:42:59.608 { 00:42:59.608 "subsystem": "keyring", 00:42:59.608 
"config": [ 00:42:59.608 { 00:42:59.608 "method": "keyring_file_add_key", 00:42:59.608 "params": { 00:42:59.608 "name": "key0", 00:42:59.608 "path": "/tmp/tmp.i4Czwd1FZK" 00:42:59.608 } 00:42:59.608 }, 00:42:59.608 { 00:42:59.608 "method": "keyring_file_add_key", 00:42:59.608 "params": { 00:42:59.608 "name": "key1", 00:42:59.608 "path": "/tmp/tmp.7LoEhIc1Xu" 00:42:59.608 } 00:42:59.608 } 00:42:59.608 ] 00:42:59.608 }, 00:42:59.608 { 00:42:59.608 "subsystem": "iobuf", 00:42:59.608 "config": [ 00:42:59.608 { 00:42:59.608 "method": "iobuf_set_options", 00:42:59.608 "params": { 00:42:59.608 "small_pool_count": 8192, 00:42:59.608 "large_pool_count": 1024, 00:42:59.608 "small_bufsize": 8192, 00:42:59.608 "large_bufsize": 135168, 00:42:59.608 "enable_numa": false 00:42:59.608 } 00:42:59.608 } 00:42:59.608 ] 00:42:59.608 }, 00:42:59.608 { 00:42:59.608 "subsystem": "sock", 00:42:59.608 "config": [ 00:42:59.608 { 00:42:59.608 "method": "sock_set_default_impl", 00:42:59.608 "params": { 00:42:59.608 "impl_name": "posix" 00:42:59.608 } 00:42:59.608 }, 00:42:59.608 { 00:42:59.608 "method": "sock_impl_set_options", 00:42:59.608 "params": { 00:42:59.608 "impl_name": "ssl", 00:42:59.608 "recv_buf_size": 4096, 00:42:59.608 "send_buf_size": 4096, 00:42:59.608 "enable_recv_pipe": true, 00:42:59.608 "enable_quickack": false, 00:42:59.608 "enable_placement_id": 0, 00:42:59.608 "enable_zerocopy_send_server": true, 00:42:59.608 "enable_zerocopy_send_client": false, 00:42:59.608 "zerocopy_threshold": 0, 00:42:59.608 "tls_version": 0, 00:42:59.608 "enable_ktls": false 00:42:59.608 } 00:42:59.608 }, 00:42:59.608 { 00:42:59.608 "method": "sock_impl_set_options", 00:42:59.608 "params": { 00:42:59.608 "impl_name": "posix", 00:42:59.608 "recv_buf_size": 2097152, 00:42:59.608 "send_buf_size": 2097152, 00:42:59.608 "enable_recv_pipe": true, 00:42:59.608 "enable_quickack": false, 00:42:59.608 "enable_placement_id": 0, 00:42:59.608 "enable_zerocopy_send_server": true, 00:42:59.608 
"enable_zerocopy_send_client": false, 00:42:59.608 "zerocopy_threshold": 0, 00:42:59.608 "tls_version": 0, 00:42:59.608 "enable_ktls": false 00:42:59.609 } 00:42:59.609 } 00:42:59.609 ] 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "subsystem": "vmd", 00:42:59.609 "config": [] 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "subsystem": "accel", 00:42:59.609 "config": [ 00:42:59.609 { 00:42:59.609 "method": "accel_set_options", 00:42:59.609 "params": { 00:42:59.609 "small_cache_size": 128, 00:42:59.609 "large_cache_size": 16, 00:42:59.609 "task_count": 2048, 00:42:59.609 "sequence_count": 2048, 00:42:59.609 "buf_count": 2048 00:42:59.609 } 00:42:59.609 } 00:42:59.609 ] 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "subsystem": "bdev", 00:42:59.609 "config": [ 00:42:59.609 { 00:42:59.609 "method": "bdev_set_options", 00:42:59.609 "params": { 00:42:59.609 "bdev_io_pool_size": 65535, 00:42:59.609 "bdev_io_cache_size": 256, 00:42:59.609 "bdev_auto_examine": true, 00:42:59.609 "iobuf_small_cache_size": 128, 00:42:59.609 "iobuf_large_cache_size": 16 00:42:59.609 } 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "method": "bdev_raid_set_options", 00:42:59.609 "params": { 00:42:59.609 "process_window_size_kb": 1024, 00:42:59.609 "process_max_bandwidth_mb_sec": 0 00:42:59.609 } 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "method": "bdev_iscsi_set_options", 00:42:59.609 "params": { 00:42:59.609 "timeout_sec": 30 00:42:59.609 } 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "method": "bdev_nvme_set_options", 00:42:59.609 "params": { 00:42:59.609 "action_on_timeout": "none", 00:42:59.609 "timeout_us": 0, 00:42:59.609 "timeout_admin_us": 0, 00:42:59.609 "keep_alive_timeout_ms": 10000, 00:42:59.609 "arbitration_burst": 0, 00:42:59.609 "low_priority_weight": 0, 00:42:59.609 "medium_priority_weight": 0, 00:42:59.609 "high_priority_weight": 0, 00:42:59.609 "nvme_adminq_poll_period_us": 10000, 00:42:59.609 "nvme_ioq_poll_period_us": 0, 00:42:59.609 "io_queue_requests": 512, 00:42:59.609 
"delay_cmd_submit": true, 00:42:59.609 "transport_retry_count": 4, 00:42:59.609 "bdev_retry_count": 3, 00:42:59.609 "transport_ack_timeout": 0, 00:42:59.609 "ctrlr_loss_timeout_sec": 0, 00:42:59.609 "reconnect_delay_sec": 0, 00:42:59.609 "fast_io_fail_timeout_sec": 0, 00:42:59.609 "disable_auto_failback": false, 00:42:59.609 "generate_uuids": false, 00:42:59.609 "transport_tos": 0, 00:42:59.609 "nvme_error_stat": false, 00:42:59.609 "rdma_srq_size": 0, 00:42:59.609 "io_path_stat": false, 00:42:59.609 "allow_accel_sequence": false, 00:42:59.609 "rdma_max_cq_size": 0, 00:42:59.609 "rdma_cm_event_timeout_ms": 0, 00:42:59.609 "dhchap_digests": [ 00:42:59.609 "sha256", 00:42:59.609 "sha384", 00:42:59.609 "sha512" 00:42:59.609 ], 00:42:59.609 "dhchap_dhgroups": [ 00:42:59.609 "null", 00:42:59.609 "ffdhe2048", 00:42:59.609 "ffdhe3072", 00:42:59.609 "ffdhe4096", 00:42:59.609 "ffdhe6144", 00:42:59.609 "ffdhe8192" 00:42:59.609 ] 00:42:59.609 } 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "method": "bdev_nvme_attach_controller", 00:42:59.609 "params": { 00:42:59.609 "name": "nvme0", 00:42:59.609 "trtype": "TCP", 00:42:59.609 "adrfam": "IPv4", 00:42:59.609 "traddr": "127.0.0.1", 00:42:59.609 "trsvcid": "4420", 00:42:59.609 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:59.609 "prchk_reftag": false, 00:42:59.609 "prchk_guard": false, 00:42:59.609 "ctrlr_loss_timeout_sec": 0, 00:42:59.609 "reconnect_delay_sec": 0, 00:42:59.609 "fast_io_fail_timeout_sec": 0, 00:42:59.609 "psk": "key0", 00:42:59.609 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:59.609 "hdgst": false, 00:42:59.609 "ddgst": false, 00:42:59.609 "multipath": "multipath" 00:42:59.609 } 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "method": "bdev_nvme_set_hotplug", 00:42:59.609 "params": { 00:42:59.609 "period_us": 100000, 00:42:59.609 "enable": false 00:42:59.609 } 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "method": "bdev_wait_for_examine" 00:42:59.609 } 00:42:59.609 ] 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 
"subsystem": "nbd", 00:42:59.609 "config": [] 00:42:59.609 } 00:42:59.609 ] 00:42:59.609 }' 00:42:59.609 08:16:52 keyring_file -- keyring/file.sh@115 -- # killprocess 974093 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 974093 ']' 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 974093 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974093 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974093' 00:42:59.609 killing process with pid 974093 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@973 -- # kill 974093 00:42:59.609 Received shutdown signal, test time was about 1.000000 seconds 00:42:59.609 00:42:59.609 Latency(us) 00:42:59.609 [2024-11-18T07:16:52.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:59.609 [2024-11-18T07:16:52.697Z] =================================================================================================================== 00:42:59.609 [2024-11-18T07:16:52.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@978 -- # wait 974093 00:42:59.609 08:16:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=975572 00:42:59.609 08:16:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 975572 /var/tmp/bperf.sock 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 975572 ']' 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 
00:42:59.609 08:16:52 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:59.609 08:16:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:59.609 08:16:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:59.609 "subsystems": [ 00:42:59.609 { 00:42:59.609 "subsystem": "keyring", 00:42:59.609 "config": [ 00:42:59.609 { 00:42:59.609 "method": "keyring_file_add_key", 00:42:59.609 "params": { 00:42:59.609 "name": "key0", 00:42:59.609 "path": "/tmp/tmp.i4Czwd1FZK" 00:42:59.609 } 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "method": "keyring_file_add_key", 00:42:59.609 "params": { 00:42:59.609 "name": "key1", 00:42:59.609 "path": "/tmp/tmp.7LoEhIc1Xu" 00:42:59.609 } 00:42:59.609 } 00:42:59.609 ] 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "subsystem": "iobuf", 00:42:59.609 "config": [ 00:42:59.609 { 00:42:59.609 "method": "iobuf_set_options", 00:42:59.609 "params": { 00:42:59.609 "small_pool_count": 8192, 00:42:59.609 "large_pool_count": 1024, 00:42:59.609 "small_bufsize": 8192, 00:42:59.609 "large_bufsize": 135168, 00:42:59.609 "enable_numa": false 00:42:59.609 } 00:42:59.609 } 00:42:59.609 ] 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "subsystem": "sock", 00:42:59.609 "config": [ 00:42:59.609 { 00:42:59.609 "method": "sock_set_default_impl", 00:42:59.609 "params": { 00:42:59.609 "impl_name": "posix" 00:42:59.609 } 00:42:59.609 }, 00:42:59.609 { 00:42:59.609 "method": "sock_impl_set_options", 00:42:59.609 "params": { 00:42:59.609 "impl_name": "ssl", 00:42:59.609 "recv_buf_size": 4096, 00:42:59.609 "send_buf_size": 4096, 00:42:59.609 "enable_recv_pipe": true, 00:42:59.609 "enable_quickack": false, 00:42:59.609 "enable_placement_id": 0, 00:42:59.609 "enable_zerocopy_send_server": true, 00:42:59.609 "enable_zerocopy_send_client": false, 00:42:59.609 "zerocopy_threshold": 0, 00:42:59.609 
"tls_version": 0, 00:42:59.609 "enable_ktls": false 00:42:59.609 } 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "method": "sock_impl_set_options", 00:42:59.610 "params": { 00:42:59.610 "impl_name": "posix", 00:42:59.610 "recv_buf_size": 2097152, 00:42:59.610 "send_buf_size": 2097152, 00:42:59.610 "enable_recv_pipe": true, 00:42:59.610 "enable_quickack": false, 00:42:59.610 "enable_placement_id": 0, 00:42:59.610 "enable_zerocopy_send_server": true, 00:42:59.610 "enable_zerocopy_send_client": false, 00:42:59.610 "zerocopy_threshold": 0, 00:42:59.610 "tls_version": 0, 00:42:59.610 "enable_ktls": false 00:42:59.610 } 00:42:59.610 } 00:42:59.610 ] 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "subsystem": "vmd", 00:42:59.610 "config": [] 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "subsystem": "accel", 00:42:59.610 "config": [ 00:42:59.610 { 00:42:59.610 "method": "accel_set_options", 00:42:59.610 "params": { 00:42:59.610 "small_cache_size": 128, 00:42:59.610 "large_cache_size": 16, 00:42:59.610 "task_count": 2048, 00:42:59.610 "sequence_count": 2048, 00:42:59.610 "buf_count": 2048 00:42:59.610 } 00:42:59.610 } 00:42:59.610 ] 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "subsystem": "bdev", 00:42:59.610 "config": [ 00:42:59.610 { 00:42:59.610 "method": "bdev_set_options", 00:42:59.610 "params": { 00:42:59.610 "bdev_io_pool_size": 65535, 00:42:59.610 "bdev_io_cache_size": 256, 00:42:59.610 "bdev_auto_examine": true, 00:42:59.610 "iobuf_small_cache_size": 128, 00:42:59.610 "iobuf_large_cache_size": 16 00:42:59.610 } 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "method": "bdev_raid_set_options", 00:42:59.610 "params": { 00:42:59.610 "process_window_size_kb": 1024, 00:42:59.610 "process_max_bandwidth_mb_sec": 0 00:42:59.610 } 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "method": "bdev_iscsi_set_options", 00:42:59.610 "params": { 00:42:59.610 "timeout_sec": 30 00:42:59.610 } 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "method": "bdev_nvme_set_options", 00:42:59.610 "params": { 
00:42:59.610 "action_on_timeout": "none", 00:42:59.610 "timeout_us": 0, 00:42:59.610 "timeout_admin_us": 0, 00:42:59.610 "keep_alive_timeout_ms": 10000, 00:42:59.610 "arbitration_burst": 0, 00:42:59.610 "low_priority_weight": 0, 00:42:59.610 "medium_priority_weight": 0, 00:42:59.610 "high_priority_weight": 0, 00:42:59.610 "nvme_adminq_poll_period_us": 10000, 00:42:59.610 "nvme_ioq_poll_period_us": 0, 00:42:59.610 "io_queue_requests": 512, 00:42:59.610 "delay_cmd_submit": true, 00:42:59.610 "transport_retry_count": 4, 00:42:59.610 "bdev_retry_count": 3, 00:42:59.610 "transport_ack_timeout": 0, 00:42:59.610 "ctrlr_loss_timeout_sec": 0, 00:42:59.610 "reconnect_delay_sec": 0, 00:42:59.610 "fast_io_fail_timeout_sec": 0, 00:42:59.610 "disable_auto_failback": false, 00:42:59.610 "generate_uuids": false, 00:42:59.610 "transport_tos": 0, 00:42:59.610 "nvme_error_stat": false, 00:42:59.610 "rdma_srq_size": 0, 00:42:59.610 "io_path_stat": false, 00:42:59.610 "allow_accel_sequence": false, 00:42:59.610 "rdma_max_cq_size": 0, 00:42:59.610 "rdma_cm_event_timeout_ms": 0, 00:42:59.610 "dhchap_digests": [ 00:42:59.610 "sha256", 00:42:59.610 "sha384", 00:42:59.610 "sha512" 00:42:59.610 ], 00:42:59.610 "dhchap_dhgroups": [ 00:42:59.610 "null", 00:42:59.610 "ffdhe2048", 00:42:59.610 "ffdhe3072", 00:42:59.610 "ffdhe4096", 00:42:59.610 "ffdhe6144", 00:42:59.610 "ffdhe8192" 00:42:59.610 ] 00:42:59.610 } 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "method": "bdev_nvme_attach_controller", 00:42:59.610 "params": { 00:42:59.610 "name": "nvme0", 00:42:59.610 "trtype": "TCP", 00:42:59.610 "adrfam": "IPv4", 00:42:59.610 "traddr": "127.0.0.1", 00:42:59.610 "trsvcid": "4420", 00:42:59.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:59.610 "prchk_reftag": false, 00:42:59.610 "prchk_guard": false, 00:42:59.610 "ctrlr_loss_timeout_sec": 0, 00:42:59.610 "reconnect_delay_sec": 0, 00:42:59.610 "fast_io_fail_timeout_sec": 0, 00:42:59.610 "psk": "key0", 00:42:59.610 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:42:59.610 "hdgst": false, 00:42:59.610 "ddgst": false, 00:42:59.610 "multipath": "multipath" 00:42:59.610 } 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "method": "bdev_nvme_set_hotplug", 00:42:59.610 "params": { 00:42:59.610 "period_us": 100000, 00:42:59.610 "enable": false 00:42:59.610 } 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "method": "bdev_wait_for_examine" 00:42:59.610 } 00:42:59.610 ] 00:42:59.610 }, 00:42:59.610 { 00:42:59.610 "subsystem": "nbd", 00:42:59.610 "config": [] 00:42:59.610 } 00:42:59.610 ] 00:42:59.610 }' 00:42:59.610 08:16:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:59.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:59.610 08:16:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:59.610 08:16:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:59.610 [2024-11-18 08:16:52.687223] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:59.610 [2024-11-18 08:16:52.687305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975572 ] 00:42:59.869 [2024-11-18 08:16:52.760560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:59.869 [2024-11-18 08:16:52.812351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:00.128 [2024-11-18 08:16:53.001646] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:00.128 08:16:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:00.128 08:16:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:00.128 08:16:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:00.128 08:16:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:00.128 08:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.386 08:16:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:00.386 08:16:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:00.386 08:16:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:00.386 08:16:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.386 08:16:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.386 08:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.386 08:16:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:00.645 08:16:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:00.645 08:16:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:00.645 08:16:53 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:00.645 08:16:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.645 08:16:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.645 08:16:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:00.645 08:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.903 08:16:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:00.903 08:16:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:00.903 08:16:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:00.903 08:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:01.161 08:16:54 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:01.161 08:16:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:01.161 08:16:54 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.i4Czwd1FZK /tmp/tmp.7LoEhIc1Xu 00:43:01.161 08:16:54 keyring_file -- keyring/file.sh@20 -- # killprocess 975572 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 975572 ']' 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 975572 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 975572 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 975572' 00:43:01.161 killing process with pid 975572 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@973 -- # kill 975572 00:43:01.161 Received shutdown signal, test time was about 1.000000 seconds 00:43:01.161 00:43:01.161 Latency(us) 00:43:01.161 [2024-11-18T07:16:54.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:01.161 [2024-11-18T07:16:54.249Z] =================================================================================================================== 00:43:01.161 [2024-11-18T07:16:54.249Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:01.161 08:16:54 keyring_file -- common/autotest_common.sh@978 -- # wait 975572 00:43:01.419 08:16:54 keyring_file -- keyring/file.sh@21 -- # killprocess 974081 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 974081 ']' 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 974081 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974081 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974081' 00:43:01.419 killing process with pid 974081 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@973 -- # kill 974081 00:43:01.419 08:16:54 keyring_file -- common/autotest_common.sh@978 -- # wait 974081 00:43:01.984 00:43:01.984 real 0m14.611s 00:43:01.984 user 0m37.115s 00:43:01.984 sys 0m3.339s 00:43:01.984 08:16:54 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:01.984 08:16:54 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:01.984 ************************************ 00:43:01.984 END TEST keyring_file 00:43:01.984 ************************************ 00:43:01.985 08:16:54 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:43:01.985 08:16:54 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:01.985 08:16:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:01.985 08:16:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:01.985 08:16:54 -- common/autotest_common.sh@10 -- # set +x 00:43:01.985 ************************************ 00:43:01.985 START TEST keyring_linux 00:43:01.985 ************************************ 00:43:01.985 08:16:54 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:01.985 Joined session keyring: 125224513 00:43:01.985 * Looking for test storage... 
00:43:01.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:01.985 08:16:54 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:01.985 08:16:54 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:43:01.985 08:16:54 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:01.985 08:16:55 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:01.985 08:16:55 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:01.985 08:16:55 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:01.985 08:16:55 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:01.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.985 --rc genhtml_branch_coverage=1 00:43:01.985 --rc genhtml_function_coverage=1 00:43:01.985 --rc genhtml_legend=1 00:43:01.985 --rc geninfo_all_blocks=1 00:43:01.985 --rc geninfo_unexecuted_blocks=1 00:43:01.985 00:43:01.985 ' 00:43:01.985 08:16:55 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:01.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.985 --rc genhtml_branch_coverage=1 00:43:01.985 --rc genhtml_function_coverage=1 00:43:01.985 --rc genhtml_legend=1 00:43:01.985 --rc geninfo_all_blocks=1 00:43:01.985 --rc geninfo_unexecuted_blocks=1 00:43:01.985 00:43:01.985 ' 
00:43:01.985 08:16:55 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:01.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.985 --rc genhtml_branch_coverage=1 00:43:01.985 --rc genhtml_function_coverage=1 00:43:01.985 --rc genhtml_legend=1 00:43:01.985 --rc geninfo_all_blocks=1 00:43:01.985 --rc geninfo_unexecuted_blocks=1 00:43:01.985 00:43:01.985 ' 00:43:01.985 08:16:55 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:01.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.985 --rc genhtml_branch_coverage=1 00:43:01.985 --rc genhtml_function_coverage=1 00:43:01.985 --rc genhtml_legend=1 00:43:01.985 --rc geninfo_all_blocks=1 00:43:01.985 --rc geninfo_unexecuted_blocks=1 00:43:01.985 00:43:01.985 ' 00:43:01.985 08:16:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:01.985 08:16:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:01.985 08:16:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:43:02.243 08:16:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:02.243 08:16:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:02.243 08:16:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:02.243 08:16:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:02.243 08:16:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:02.243 08:16:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:02.243 08:16:55 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:02.243 08:16:55 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:02.243 08:16:55 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:02.243 08:16:55 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:02.243 08:16:55 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:02.243 08:16:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:02.243 08:16:55 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:02.243 08:16:55 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:02.243 08:16:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:02.243 08:16:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:02.243 08:16:55 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:02.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:02.244 /tmp/:spdk-test:key0 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:02.244 08:16:55 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:02.244 08:16:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:02.244 /tmp/:spdk-test:key1 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=976039 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:02.244 08:16:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 976039 00:43:02.244 08:16:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 976039 ']' 00:43:02.244 08:16:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:02.244 08:16:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:02.244 08:16:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:02.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:02.244 08:16:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:02.244 08:16:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:02.244 [2024-11-18 08:16:55.216970] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:43:02.244 [2024-11-18 08:16:55.217076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976039 ] 00:43:02.244 [2024-11-18 08:16:55.287628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.533 [2024-11-18 08:16:55.334046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.533 08:16:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:02.533 08:16:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:02.533 08:16:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:02.533 08:16:55 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.533 08:16:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:02.533 [2024-11-18 08:16:55.598037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:02.820 null0 00:43:02.820 [2024-11-18 08:16:55.630081] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:02.820 [2024-11-18 08:16:55.630601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:02.820 08:16:55 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.820 08:16:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:02.820 63556088 00:43:02.820 08:16:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:02.820 881822125 00:43:02.820 08:16:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=976046 00:43:02.820 08:16:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 976046 /var/tmp/bperf.sock 00:43:02.820 08:16:55 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:02.820 08:16:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 976046 ']' 00:43:02.820 08:16:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:02.820 08:16:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:02.820 08:16:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:02.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:02.820 08:16:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:02.820 08:16:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:02.820 [2024-11-18 08:16:55.700736] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:43:02.820 [2024-11-18 08:16:55.700835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976046 ] 00:43:02.820 [2024-11-18 08:16:55.768401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.820 [2024-11-18 08:16:55.814509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:03.078 08:16:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:03.078 08:16:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:03.078 08:16:55 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:03.078 08:16:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:03.336 08:16:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:03.336 08:16:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:03.594 08:16:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:03.594 08:16:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:03.852 [2024-11-18 08:16:56.793979] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:03.852 nvme0n1 00:43:03.852 08:16:56 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:43:03.852 08:16:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:03.852 08:16:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:03.852 08:16:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:03.852 08:16:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:03.852 08:16:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.110 08:16:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:04.110 08:16:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:04.110 08:16:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:04.110 08:16:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:04.110 08:16:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:04.110 08:16:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.110 08:16:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:04.368 08:16:57 keyring_linux -- keyring/linux.sh@25 -- # sn=63556088 00:43:04.368 08:16:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:04.368 08:16:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:04.368 08:16:57 keyring_linux -- keyring/linux.sh@26 -- # [[ 63556088 == \6\3\5\5\6\0\8\8 ]] 00:43:04.368 08:16:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 63556088 00:43:04.368 08:16:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:04.368 08:16:57 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:04.625 Running I/O for 1 seconds... 00:43:05.557 11284.00 IOPS, 44.08 MiB/s 00:43:05.557 Latency(us) 00:43:05.557 [2024-11-18T07:16:58.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:05.557 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:05.557 nvme0n1 : 1.01 11294.57 44.12 0.00 0.00 11267.17 8883.77 20291.89 00:43:05.557 [2024-11-18T07:16:58.645Z] =================================================================================================================== 00:43:05.557 [2024-11-18T07:16:58.645Z] Total : 11294.57 44.12 0.00 0.00 11267.17 8883.77 20291.89 00:43:05.557 { 00:43:05.557 "results": [ 00:43:05.557 { 00:43:05.557 "job": "nvme0n1", 00:43:05.557 "core_mask": "0x2", 00:43:05.557 "workload": "randread", 00:43:05.557 "status": "finished", 00:43:05.557 "queue_depth": 128, 00:43:05.557 "io_size": 4096, 00:43:05.557 "runtime": 1.010486, 00:43:05.557 "iops": 11294.565189423703, 00:43:05.557 "mibps": 44.11939527118634, 00:43:05.557 "io_failed": 0, 00:43:05.557 "io_timeout": 0, 00:43:05.557 "avg_latency_us": 11267.166065467904, 00:43:05.557 "min_latency_us": 8883.76888888889, 00:43:05.557 "max_latency_us": 20291.88740740741 00:43:05.557 } 00:43:05.557 ], 00:43:05.557 "core_count": 1 00:43:05.557 } 00:43:05.557 08:16:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:05.558 08:16:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:05.815 08:16:58 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:05.815 08:16:58 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:05.815 08:16:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:05.815 08:16:58 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:05.815 08:16:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:05.815 08:16:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:06.072 08:16:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:06.072 08:16:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:06.072 08:16:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:06.072 08:16:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:06.072 08:16:59 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:43:06.072 08:16:59 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:06.072 08:16:59 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:06.072 08:16:59 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:06.072 08:16:59 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:06.072 08:16:59 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:06.072 08:16:59 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:06.072 08:16:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:06.332 [2024-11-18 08:16:59.366065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:06.332 [2024-11-18 08:16:59.366687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9fa90 (107): Transport endpoint is not connected 00:43:06.332 [2024-11-18 08:16:59.367677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9fa90 (9): Bad file descriptor 00:43:06.332 [2024-11-18 08:16:59.368677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:06.332 [2024-11-18 08:16:59.368703] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:06.332 [2024-11-18 08:16:59.368717] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:06.332 [2024-11-18 08:16:59.368732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:06.332 request:
00:43:06.332 {
00:43:06.332 "name": "nvme0",
00:43:06.332 "trtype": "tcp",
00:43:06.332 "traddr": "127.0.0.1",
00:43:06.332 "adrfam": "ipv4",
00:43:06.332 "trsvcid": "4420",
00:43:06.332 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:43:06.332 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:43:06.332 "prchk_reftag": false,
00:43:06.332 "prchk_guard": false,
00:43:06.332 "hdgst": false,
00:43:06.332 "ddgst": false,
00:43:06.332 "psk": ":spdk-test:key1",
00:43:06.332 "allow_unrecognized_csi": false,
00:43:06.332 "method": "bdev_nvme_attach_controller",
00:43:06.332 "req_id": 1
00:43:06.332 }
00:43:06.332 Got JSON-RPC error response
00:43:06.332 response:
00:43:06.332 {
00:43:06.332 "code": -5,
00:43:06.332 "message": "Input/output error"
00:43:06.332 }
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@33 -- # sn=63556088
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 63556088
00:43:06.332 1 links removed
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@33 -- # sn=881822125
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 881822125
00:43:06.332 1 links removed
00:43:06.332 08:16:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 976046
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 976046 ']'
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 976046
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:06.332 08:16:59 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 976046
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 976046'
killing process with pid 976046
08:16:59 keyring_linux -- common/autotest_common.sh@973 -- # kill 976046
Received shutdown signal, test time was about 1.000000 seconds
00:43:06.589
00:43:06.589 Latency(us)
00:43:06.589 [2024-11-18T07:16:59.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:06.589 [2024-11-18T07:16:59.677Z] ===================================================================================================================
00:43:06.589 [2024-11-18T07:16:59.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@978 -- # wait 976046
00:43:06.589 08:16:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 976039
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 976039 ']'
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 976039
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 976039
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:43:06.589 08:16:59 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 976039'
killing process with pid 976039
08:16:59 keyring_linux -- common/autotest_common.sh@973 -- # kill 976039
08:16:59 keyring_linux -- common/autotest_common.sh@978 -- # wait 976039
00:43:07.154
00:43:07.154 real 0m5.135s
00:43:07.154 user 0m10.261s
00:43:07.154 sys 0m1.583s
00:43:07.154 08:17:00 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:43:07.154 08:17:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:43:07.154 ************************************
00:43:07.154 END TEST keyring_linux
00:43:07.154 ************************************
00:43:07.154 08:17:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:43:07.154 08:17:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:43:07.155 08:17:00 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:43:07.155 08:17:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:43:07.155 08:17:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:43:07.155 08:17:00 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:43:07.155 08:17:00 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:43:07.155 08:17:00 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:43:07.155 08:17:00 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:43:07.155 08:17:00 -- common/autotest_common.sh@726 -- # xtrace_disable
00:43:07.155 08:17:00 -- common/autotest_common.sh@10 -- # set +x
00:43:07.155 08:17:00 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:43:07.155 08:17:00 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:43:07.155 08:17:00 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:43:07.155 08:17:00 -- common/autotest_common.sh@10 -- # set +x
00:43:09.058 INFO: APP EXITING
00:43:09.058 INFO: killing all VMs
00:43:09.058 INFO: killing vhost app
00:43:09.058 INFO: EXIT DONE
00:43:10.433 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:43:10.433 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:43:10.433 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:43:10.433 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:43:10.433 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:43:10.433 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:43:10.433 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:43:10.433 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:43:10.433 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:43:10.433 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:43:10.433 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:43:10.433 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:43:10.433 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:43:10.433 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:43:10.433 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:43:10.433 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:43:10.433 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:43:11.811 Cleaning
00:43:11.811 Removing: /var/run/dpdk/spdk0/config
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:43:11.811 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:43:11.811 Removing: /var/run/dpdk/spdk0/hugepage_info
00:43:11.811 Removing: /var/run/dpdk/spdk1/config
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:43:11.811 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:43:11.811 Removing: /var/run/dpdk/spdk1/hugepage_info
00:43:11.811 Removing: /var/run/dpdk/spdk2/config
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:43:11.811 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:43:11.811 Removing: /var/run/dpdk/spdk2/hugepage_info
00:43:11.811 Removing: /var/run/dpdk/spdk3/config
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:43:11.811 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:43:11.811 Removing: /var/run/dpdk/spdk3/hugepage_info
00:43:11.811 Removing: /var/run/dpdk/spdk4/config
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:43:11.811 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:43:11.811 Removing: /var/run/dpdk/spdk4/hugepage_info
00:43:11.811 Removing: /dev/shm/bdev_svc_trace.1 00:43:11.812 Removing: /dev/shm/nvmf_trace.0 00:43:11.812 Removing: /dev/shm/spdk_tgt_trace.pid592496 00:43:11.812 Removing: /var/run/dpdk/spdk0 00:43:11.812 Removing: /var/run/dpdk/spdk1 00:43:11.812 Removing: /var/run/dpdk/spdk2 00:43:11.812 Removing: /var/run/dpdk/spdk3 00:43:11.812 Removing: /var/run/dpdk/spdk4 00:43:11.812 Removing: /var/run/dpdk/spdk_pid590937 00:43:11.812 Removing: /var/run/dpdk/spdk_pid591675 00:43:11.812 Removing: /var/run/dpdk/spdk_pid592496 00:43:11.812 Removing: /var/run/dpdk/spdk_pid592943 00:43:11.812 Removing: /var/run/dpdk/spdk_pid593636 00:43:11.812 Removing: /var/run/dpdk/spdk_pid593776 00:43:11.812 Removing: /var/run/dpdk/spdk_pid594487 00:43:11.812 Removing: /var/run/dpdk/spdk_pid594504 00:43:11.812 Removing: /var/run/dpdk/spdk_pid594764 00:43:11.812 Removing: /var/run/dpdk/spdk_pid596080 00:43:11.812 Removing: /var/run/dpdk/spdk_pid597003 00:43:11.812 Removing: /var/run/dpdk/spdk_pid597199 00:43:11.812 Removing: /var/run/dpdk/spdk_pid597399 00:43:11.812 Removing: /var/run/dpdk/spdk_pid597632 00:43:11.812 Removing: /var/run/dpdk/spdk_pid597873 00:43:11.812 Removing: /var/run/dpdk/spdk_pid598080 00:43:11.812 Removing: /var/run/dpdk/spdk_pid598238 00:43:11.812 Removing: /var/run/dpdk/spdk_pid598426 00:43:11.812 Removing: /var/run/dpdk/spdk_pid598742 00:43:11.812 Removing: /var/run/dpdk/spdk_pid601235 00:43:11.812 Removing: /var/run/dpdk/spdk_pid601398 00:43:11.812 Removing: /var/run/dpdk/spdk_pid601560 00:43:11.812 Removing: /var/run/dpdk/spdk_pid601573 00:43:11.812 Removing: /var/run/dpdk/spdk_pid601872 00:43:11.812 Removing: /var/run/dpdk/spdk_pid602001 00:43:11.812 Removing: /var/run/dpdk/spdk_pid602311 00:43:11.812 Removing: /var/run/dpdk/spdk_pid602441 00:43:11.812 Removing: /var/run/dpdk/spdk_pid602603 00:43:11.812 Removing: /var/run/dpdk/spdk_pid602619 00:43:11.812 Removing: /var/run/dpdk/spdk_pid602889 00:43:11.812 Removing: /var/run/dpdk/spdk_pid602908 00:43:11.812 
Removing: /var/run/dpdk/spdk_pid603287 00:43:11.812 Removing: /var/run/dpdk/spdk_pid603453 00:43:11.812 Removing: /var/run/dpdk/spdk_pid603769 00:43:11.812 Removing: /var/run/dpdk/spdk_pid605879 00:43:11.812 Removing: /var/run/dpdk/spdk_pid608512 00:43:11.812 Removing: /var/run/dpdk/spdk_pid616155 00:43:11.812 Removing: /var/run/dpdk/spdk_pid616637 00:43:11.812 Removing: /var/run/dpdk/spdk_pid619086 00:43:11.812 Removing: /var/run/dpdk/spdk_pid619363 00:43:11.812 Removing: /var/run/dpdk/spdk_pid621899 00:43:11.812 Removing: /var/run/dpdk/spdk_pid625735 00:43:11.812 Removing: /var/run/dpdk/spdk_pid627809 00:43:11.812 Removing: /var/run/dpdk/spdk_pid634236 00:43:11.812 Removing: /var/run/dpdk/spdk_pid639514 00:43:11.812 Removing: /var/run/dpdk/spdk_pid640798 00:43:12.071 Removing: /var/run/dpdk/spdk_pid641464 00:43:12.071 Removing: /var/run/dpdk/spdk_pid652469 00:43:12.071 Removing: /var/run/dpdk/spdk_pid654754 00:43:12.071 Removing: /var/run/dpdk/spdk_pid710102 00:43:12.071 Removing: /var/run/dpdk/spdk_pid713591 00:43:12.071 Removing: /var/run/dpdk/spdk_pid717420 00:43:12.071 Removing: /var/run/dpdk/spdk_pid721698 00:43:12.071 Removing: /var/run/dpdk/spdk_pid721704 00:43:12.071 Removing: /var/run/dpdk/spdk_pid722358 00:43:12.071 Removing: /var/run/dpdk/spdk_pid723011 00:43:12.071 Removing: /var/run/dpdk/spdk_pid723549 00:43:12.071 Removing: /var/run/dpdk/spdk_pid723950 00:43:12.071 Removing: /var/run/dpdk/spdk_pid724073 00:43:12.071 Removing: /var/run/dpdk/spdk_pid724213 00:43:12.071 Removing: /var/run/dpdk/spdk_pid724351 00:43:12.071 Removing: /var/run/dpdk/spdk_pid724354 00:43:12.071 Removing: /var/run/dpdk/spdk_pid725010 00:43:12.071 Removing: /var/run/dpdk/spdk_pid725659 00:43:12.071 Removing: /var/run/dpdk/spdk_pid726205 00:43:12.071 Removing: /var/run/dpdk/spdk_pid726600 00:43:12.071 Removing: /var/run/dpdk/spdk_pid726714 00:43:12.071 Removing: /var/run/dpdk/spdk_pid726868 00:43:12.071 Removing: /var/run/dpdk/spdk_pid727757 00:43:12.071 Removing: 
/var/run/dpdk/spdk_pid728485 00:43:12.071 Removing: /var/run/dpdk/spdk_pid733820 00:43:12.071 Removing: /var/run/dpdk/spdk_pid762071 00:43:12.071 Removing: /var/run/dpdk/spdk_pid765264 00:43:12.071 Removing: /var/run/dpdk/spdk_pid766439 00:43:12.071 Removing: /var/run/dpdk/spdk_pid767760 00:43:12.071 Removing: /var/run/dpdk/spdk_pid767899 00:43:12.071 Removing: /var/run/dpdk/spdk_pid768047 00:43:12.071 Removing: /var/run/dpdk/spdk_pid768188 00:43:12.071 Removing: /var/run/dpdk/spdk_pid768625 00:43:12.071 Removing: /var/run/dpdk/spdk_pid769941 00:43:12.071 Removing: /var/run/dpdk/spdk_pid770674 00:43:12.071 Removing: /var/run/dpdk/spdk_pid771110 00:43:12.071 Removing: /var/run/dpdk/spdk_pid772709 00:43:12.071 Removing: /var/run/dpdk/spdk_pid773019 00:43:12.071 Removing: /var/run/dpdk/spdk_pid773575 00:43:12.071 Removing: /var/run/dpdk/spdk_pid775966 00:43:12.071 Removing: /var/run/dpdk/spdk_pid779251 00:43:12.071 Removing: /var/run/dpdk/spdk_pid779252 00:43:12.071 Removing: /var/run/dpdk/spdk_pid779253 00:43:12.071 Removing: /var/run/dpdk/spdk_pid781470 00:43:12.072 Removing: /var/run/dpdk/spdk_pid783670 00:43:12.072 Removing: /var/run/dpdk/spdk_pid787197 00:43:12.072 Removing: /var/run/dpdk/spdk_pid810285 00:43:12.072 Removing: /var/run/dpdk/spdk_pid813062 00:43:12.072 Removing: /var/run/dpdk/spdk_pid816847 00:43:12.072 Removing: /var/run/dpdk/spdk_pid817795 00:43:12.072 Removing: /var/run/dpdk/spdk_pid818873 00:43:12.072 Removing: /var/run/dpdk/spdk_pid819958 00:43:12.072 Removing: /var/run/dpdk/spdk_pid823244 00:43:12.072 Removing: /var/run/dpdk/spdk_pid825804 00:43:12.072 Removing: /var/run/dpdk/spdk_pid828137 00:43:12.072 Removing: /var/run/dpdk/spdk_pid832401 00:43:12.072 Removing: /var/run/dpdk/spdk_pid832405 00:43:12.072 Removing: /var/run/dpdk/spdk_pid835300 00:43:12.072 Removing: /var/run/dpdk/spdk_pid835440 00:43:12.072 Removing: /var/run/dpdk/spdk_pid835580 00:43:12.072 Removing: /var/run/dpdk/spdk_pid835962 00:43:12.072 Removing: 
/var/run/dpdk/spdk_pid835971 00:43:12.072 Removing: /var/run/dpdk/spdk_pid837045 00:43:12.072 Removing: /var/run/dpdk/spdk_pid838220 00:43:12.072 Removing: /var/run/dpdk/spdk_pid839420 00:43:12.072 Removing: /var/run/dpdk/spdk_pid840661 00:43:12.072 Removing: /var/run/dpdk/spdk_pid841884 00:43:12.072 Removing: /var/run/dpdk/spdk_pid843069 00:43:12.072 Removing: /var/run/dpdk/spdk_pid846887 00:43:12.072 Removing: /var/run/dpdk/spdk_pid847217 00:43:12.072 Removing: /var/run/dpdk/spdk_pid848580 00:43:12.072 Removing: /var/run/dpdk/spdk_pid849350 00:43:12.072 Removing: /var/run/dpdk/spdk_pid853190 00:43:12.072 Removing: /var/run/dpdk/spdk_pid855663 00:43:12.072 Removing: /var/run/dpdk/spdk_pid859080 00:43:12.072 Removing: /var/run/dpdk/spdk_pid862541 00:43:12.072 Removing: /var/run/dpdk/spdk_pid869028 00:43:12.072 Removing: /var/run/dpdk/spdk_pid873388 00:43:12.072 Removing: /var/run/dpdk/spdk_pid873399 00:43:12.072 Removing: /var/run/dpdk/spdk_pid885862 00:43:12.072 Removing: /var/run/dpdk/spdk_pid886388 00:43:12.072 Removing: /var/run/dpdk/spdk_pid886910 00:43:12.072 Removing: /var/run/dpdk/spdk_pid887320 00:43:12.072 Removing: /var/run/dpdk/spdk_pid888404 00:43:12.072 Removing: /var/run/dpdk/spdk_pid888811 00:43:12.072 Removing: /var/run/dpdk/spdk_pid889333 00:43:12.072 Removing: /var/run/dpdk/spdk_pid889743 00:43:12.072 Removing: /var/run/dpdk/spdk_pid892167 00:43:12.072 Removing: /var/run/dpdk/spdk_pid892386 00:43:12.072 Removing: /var/run/dpdk/spdk_pid896183 00:43:12.072 Removing: /var/run/dpdk/spdk_pid896242 00:43:12.072 Removing: /var/run/dpdk/spdk_pid899589 00:43:12.072 Removing: /var/run/dpdk/spdk_pid902203 00:43:12.072 Removing: /var/run/dpdk/spdk_pid909121 00:43:12.072 Removing: /var/run/dpdk/spdk_pid909522 00:43:12.072 Removing: /var/run/dpdk/spdk_pid912019 00:43:12.072 Removing: /var/run/dpdk/spdk_pid912242 00:43:12.072 Removing: /var/run/dpdk/spdk_pid914789 00:43:12.072 Removing: /var/run/dpdk/spdk_pid918476 00:43:12.072 Removing: 
/var/run/dpdk/spdk_pid920741 00:43:12.072 Removing: /var/run/dpdk/spdk_pid927511 00:43:12.072 Removing: /var/run/dpdk/spdk_pid932833 00:43:12.072 Removing: /var/run/dpdk/spdk_pid934008 00:43:12.072 Removing: /var/run/dpdk/spdk_pid934671 00:43:12.072 Removing: /var/run/dpdk/spdk_pid944837 00:43:12.072 Removing: /var/run/dpdk/spdk_pid947081 00:43:12.072 Removing: /var/run/dpdk/spdk_pid949083 00:43:12.072 Removing: /var/run/dpdk/spdk_pid954006 00:43:12.072 Removing: /var/run/dpdk/spdk_pid954126 00:43:12.072 Removing: /var/run/dpdk/spdk_pid956949 00:43:12.072 Removing: /var/run/dpdk/spdk_pid958921 00:43:12.072 Removing: /var/run/dpdk/spdk_pid960335 00:43:12.072 Removing: /var/run/dpdk/spdk_pid961185 00:43:12.072 Removing: /var/run/dpdk/spdk_pid962581 00:43:12.072 Removing: /var/run/dpdk/spdk_pid963334 00:43:12.072 Removing: /var/run/dpdk/spdk_pid968635 00:43:12.072 Removing: /var/run/dpdk/spdk_pid969009 00:43:12.072 Removing: /var/run/dpdk/spdk_pid969398 00:43:12.072 Removing: /var/run/dpdk/spdk_pid970965 00:43:12.072 Removing: /var/run/dpdk/spdk_pid971357 00:43:12.072 Removing: /var/run/dpdk/spdk_pid971636 00:43:12.072 Removing: /var/run/dpdk/spdk_pid974081 00:43:12.072 Removing: /var/run/dpdk/spdk_pid974093 00:43:12.072 Removing: /var/run/dpdk/spdk_pid975572 00:43:12.072 Removing: /var/run/dpdk/spdk_pid976039 00:43:12.072 Removing: /var/run/dpdk/spdk_pid976046 00:43:12.072 Clean 00:43:12.331 08:17:05 -- common/autotest_common.sh@1453 -- # return 0 00:43:12.331 08:17:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:43:12.331 08:17:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:12.331 08:17:05 -- common/autotest_common.sh@10 -- # set +x 00:43:12.331 08:17:05 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:43:12.331 08:17:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:12.331 08:17:05 -- common/autotest_common.sh@10 -- # set +x 00:43:12.331 08:17:05 -- spdk/autotest.sh@392 -- # chmod a+r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:12.331 08:17:05 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:43:12.331 08:17:05 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:43:12.331 08:17:05 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:43:12.331 08:17:05 -- spdk/autotest.sh@398 -- # hostname
00:43:12.331 08:17:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:43:12.589 geninfo: WARNING: invalid characters removed from testname!
00:43:44.673 08:17:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:47.200 08:17:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:50.495 08:17:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:53.776 08:17:46 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:56.305 08:17:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:59.596 08:17:52 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:02.122 08:17:55 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:44:02.122 08:17:55 -- spdk/autorun.sh@1 -- $ timing_finish
00:44:02.122 08:17:55 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:44:02.122 08:17:55 --
common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:44:02.122 08:17:55 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:44:02.122 08:17:55 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:02.382 + [[ -n 498710 ]]
00:44:02.382 + sudo kill 498710
00:44:02.393 [Pipeline] }
00:44:02.408 [Pipeline] // stage
00:44:02.413 [Pipeline] }
00:44:02.427 [Pipeline] // timeout
00:44:02.432 [Pipeline] }
00:44:02.446 [Pipeline] // catchError
00:44:02.451 [Pipeline] }
00:44:02.466 [Pipeline] // wrap
00:44:02.472 [Pipeline] }
00:44:02.485 [Pipeline] // catchError
00:44:02.494 [Pipeline] stage
00:44:02.496 [Pipeline] { (Epilogue)
00:44:02.509 [Pipeline] catchError
00:44:02.511 [Pipeline] {
00:44:02.524 [Pipeline] echo
00:44:02.526 Cleanup processes
00:44:02.532 [Pipeline] sh
00:44:02.822 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:02.822 988315 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:02.837 [Pipeline] sh
00:44:03.123 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:03.123 ++ grep -v 'sudo pgrep'
00:44:03.123 ++ awk '{print $1}'
00:44:03.123 + sudo kill -9
00:44:03.123 + true
00:44:03.136 [Pipeline] sh
00:44:03.456 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:44:15.723 [Pipeline] sh
00:44:16.011 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:44:16.012 Artifacts sizes are good
00:44:16.029 [Pipeline] archiveArtifacts
00:44:16.038 Archiving artifacts
00:44:16.215 [Pipeline] sh
00:44:16.513 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:44:16.527 [Pipeline] cleanWs
00:44:16.536 [WS-CLEANUP] Deleting project workspace...
00:44:16.536 [WS-CLEANUP] Deferred wipeout is used...
00:44:16.543 [WS-CLEANUP] done
00:44:16.545 [Pipeline] }
00:44:16.561 [Pipeline] // catchError
00:44:16.571 [Pipeline] sh
00:44:16.852 + logger -p user.info -t JENKINS-CI
00:44:16.860 [Pipeline] }
00:44:16.873 [Pipeline] // stage
00:44:16.878 [Pipeline] }
00:44:16.891 [Pipeline] // node
00:44:16.895 [Pipeline] End of Pipeline
00:44:16.929 Finished: SUCCESS